We have a random number generator, but we do not have random art. (Mezei)
Computer art began with technical people who had access to computers and computer graphics equipment. They discovered that pleasing results can be achieved when a family of mathematical curves is plotted. Effects similar to Lissajous curves could be generated on cathode-ray tubes or by means of pendulums, including moiré effects when two grids of curves are overlaid. The results are the precise and regular shapes we have come to call mathematical.
Many computer applications involve the use of random numbers generated within the computer. The two most common distributions of such random numbers are the so-called rectangular or uniform, where the likelihood of obtaining any particular number within the allowed range is the same as the likelihood of any other number (fig 1); and the normal or Gaussian distribution, where the nearer to the average a number is, the more likely it is to be chosen (fig 2). In the illustrations, these distributions are shown by means of spectral lines, 100 lines over a 10-inch base. With the rectangular distribution, the density of lines is the same, on the average, along any part of the base, similar to the disposition of blades of grass on a lawn, pebbles on a gravel road, or leaves on a dense bush. With the normal distribution, the lines are most dense near the centre (the average), thinning out towards the two sides. Were we to redraw these figures a large number of times, using different sequences of random numbers (of the same distribution) each time, the results would look very similar, though no two drawings would be identical.
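The two distributions of figs 1 and 2 can be sketched in a few lines of code. This is an illustrative reconstruction, not the original program: 100 line positions are drawn over a 10-inch base, uniformly in one case and normally (centred on the middle of the base, with an assumed standard deviation of 1.5 inches) in the other, and a crude one-inch-bin count stands in for the plotted spectra.

```python
import random

random.seed(1)  # reproducible runs; any seed gives a similar-looking result

BASE = 10.0  # 10-inch base, as in figs 1 and 2
N = 100      # 100 spectral lines

# Uniform: every position along the base is equally likely.
uniform_lines = [random.uniform(0, BASE) for _ in range(N)]

# Normal: positions cluster around the average (here the centre of the base).
# Values falling outside the base are simply redrawn to stay in range.
def normal_on_base(mean=BASE / 2, sd=1.5):
    while True:
        x = random.gauss(mean, sd)
        if 0 <= x <= BASE:
            return x

normal_lines = [normal_on_base() for _ in range(N)]

# Crude textual "spectra": count the lines falling in each 1-inch bin.
def histogram(xs):
    counts = [0] * int(BASE)
    for x in xs:
        counts[min(int(x), int(BASE) - 1)] += 1
    return counts

print("uniform:", histogram(uniform_lines))
print("normal: ", histogram(normal_lines))
```

Rerunning with a different seed corresponds to redrawing the figures with a fresh sequence of random numbers: the bin counts change, but the uniform spectrum stays roughly even while the normal one stays dense at the centre.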
From the very beginning, these random numbers played an essential role in computer art experiments, helping to avoid the monotony of regularly regimented spacing of the images. A large number of straight lines, or squares, or circles, can be dispersed within the frame in a random manner, allowing the size, the position and the orientation of the individual figures to be determined by chance. This is the general concept of many of the earliest computer art products.
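The general concept of dispersing figures at random can be sketched as follows. The frame dimensions and the size limits here are hypothetical, chosen only for illustration: each square gets a chance position, size and orientation within stated limits.

```python
import random

random.seed(7)

FRAME = (0.0, 0.0, 10.0, 10.0)  # hypothetical 10 x 10 inch frame

def random_square():
    """Let chance fix the position, size and orientation of one square."""
    x0, y0, x1, y1 = FRAME
    return {
        "x": random.uniform(x0, x1),           # position within the frame
        "y": random.uniform(y0, y1),
        "side": random.uniform(0.2, 1.5),      # size within chosen limits
        "angle": random.uniform(0.0, 360.0),   # orientation in degrees
    }

# Disperse a large number of squares within the frame at random.
squares = [random_square() for _ in range(40)]
for s in squares[:3]:
    print(s)
```

The design decisions (squares rather than circles, forty of them, these size limits) are the fixed part; only their realization is left to chance.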
Regularity, predictability, unity and uniformity are considered pleasing; complexity, irregularity, unpredictability and variation are thought to be interesting. Many definitions of art refer to unity within complexity. The element implicit in using the computer for making designs is that all the decisions have to be made specific and explicit, which makes them open to study. By its very nature, computer art becomes part of 'generative aesthetics', as discussed by Max Bense.
We have a random number generator, but we do not have random art. Many design decisions have to be made from the very beginning, such as the choice of squares or straight lines among all the possible two-dimensional figures, or the size and shape of the frame. These and other constraints impose the unity, the overall structure, or in the terminology of information aesthetics, the macro-aesthetics of the picture. Were we to repeat the experiment any number of times we would get a large number of variations, some more interesting than others, but the overall structure would remain recognizably the same. The program embodies the generating rules, or the algorithm, which with random selection is capable of producing an infinite variety of similar pictures. The creativity is in the choice of the algorithm, not in the grinding out of the variations, which represent the micro-aesthetics of a picture.
The program could be so complex that its designer would be unable to foretell the kind of pictures that it would produce. However, after a few are actually realized, the potential of the program becomes quite clear. Most computer pictures which we see are chosen by their maker, from a great number which are actually produced. Unfortunately, the measures of aesthetic values, such as those proposed by G D Birkhoff, are not yet accurate enough to enable us to include in the program the criteria for selecting the most pleasing of our variations.
The programmer-artist goes one step further in his influence over his results. After viewing the output he may well decide to make minor alterations to the program, changing the limits of the distributions, the size of the individual elements, and so on. Alternatively he might even decide to change some of the major decisions.
To say that we have order on the one side and disorder or randomness on the other is an oversimplification. Two- or three-dimensional figures have many aspects and there are many ways of looking at them. Order with respect to what: choice of elements, number of elements, position, orientation, size, texture, colour? Each of these aspects can be analysed separately. In respect of any of these attributes we can measure the amount of randomness (or complexity) by means of information theory and auto-correlation. This has already been done to a certain extent in information aesthetics.
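One of the measures mentioned above, autocorrelation, can be computed directly. This is a minimal sketch, not drawn from the original text: the lag-1 autocorrelation of a sequence of black/white values (coded 0 and 1) is near zero for an unordered random sequence and near minus one for strict alternation, so it quantifies one kind of order.

```python
import random

random.seed(3)

def autocorrelation(xs, lag):
    """Correlation of a sequence with itself shifted by `lag` steps."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    if var == 0:
        return 1.0  # a constant sequence is perfectly self-correlated
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean)
              for i in range(n - lag)) / n
    return cov / var

random_seq = [random.choice([0, 1]) for _ in range(1000)]  # no order
alternating = [i % 2 for i in range(1000)]                 # strict alternation

print(autocorrelation(random_seq, 1))   # near 0: no order at lag 1
print(autocorrelation(alternating, 1))  # near -1: regular alternation
```

The same statistic applied to any single attribute (position, size, orientation) gives a separate measure of randomness for that attribute, in the spirit of information aesthetics.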
To demonstrate the concept of degrees of randomness, one can generate a particular scale of randomness (fig 3).
Let us take a 9 × 9 checkerboard and colour the individual black and white squares according to the rule known as a Markov chain. We colour the top left corner black or white arbitrarily (i.e. a white square at the beginning being as likely as a black one). Let us assign a transition probability of 0.5. Going from left to right, and restarting each line at the left, we decide the colour of each square based only on the colour of the last one; the chance of a change in colour is determined by the transition probability. Thus if the last square was black, the chances are fifty-fifty that the next will be white, and vice versa. The result is the middle checkerboard in fig 3. Now let us change the transition probability to 0.55 and repeat the process, plotting the result to the right of the first checkerboard; then we take 0.60, 0.65 and so on, up to 1.00, which means the certainty of change. What we then have is a scale going towards more and more regular alternation, till we get the regular checkerboard on the extreme right. One of the most interesting areas is around a transition probability of 0.9 (third from the right), where we have a regular alternation occasionally broken by a continuation. Now we plot the results with transition probabilities of 0.45, 0.40, 0.35 down to 0.00 to the left of the first checkerboard. We thus obtain another scale going towards more and more regular continuation, only occasionally broken by a change in colour at 0.10, and pure repetition at the extreme left. Starting with a search for one scale, we end up with two of them, demonstrating the complexity of the problem.
I will use O Canada (figs 4 and 5) to give a detailed description of what I call controlled randomness. Here not only were external constraints imposed, but the degree of randomness permitted was also controlled. In this picture I have used four symbols appropriate to Canada's centennial in 1967. The major design decision was that all details should be settled randomly, but that the resulting pictorial elements should be arranged into a number of clusters.
As a first step the frame was to contain one cluster. There are four figures: beaver, maple leaf, centennial symbol, Expo symbol. It was necessary then to decide how many figures would constitute the cluster. Only the two random distributions already described were to be used. For a uniform distribution we need merely specify the minimum and maximum values, e.g. 4 and 8, meaning that on any particular run of the program (i.e. for each cluster) there is an equal chance of ending up with 4, 5, 6, 7 or 8 figures. To use a normal distribution we specify, in addition, the average value (the most likely) and the standard deviation (a measure of the scatter of values, or how fast the density of values falls off in either direction). We might say, for example, that on average 6 figures should be used, with a standard deviation of 1, a minimum of 3 and a maximum of 12. (This would mean that any number between 3 and 12 may be chosen, but about ninety-five per cent of the choices fall within two standard deviations of the average, that is between 4 and 8.)
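Choosing a count under each of the two distributions might be coded as follows. This is a sketch of the idea, not the original program; the normal choice is rounded to an integer and simply redrawn until it falls within the stated minimum and maximum.

```python
import random

random.seed(11)

def uniform_count(lo, hi):
    """Equal chance of any integer from lo to hi, e.g. 4..8 figures."""
    return random.randint(lo, hi)

def normal_count(average, sd, lo, hi):
    """Round a normal draw to an integer; redraw until it lies in [lo, hi]."""
    while True:
        n = round(random.gauss(average, sd))
        if lo <= n <= hi:
            return n

# e.g. on average 6 figures, standard deviation 1, minimum 3, maximum 12
counts = [normal_count(6, 1, 3, 12) for _ in range(1000)]
print(min(counts), max(counts), sum(counts) / len(counts))
```

Over many runs the average settles near 6 and values far from it appear only rarely, which is exactly the behaviour the parameters were meant to specify.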
We choose one of the four figures with equal probability for each. (We could as well specify different relative proportions in which they should appear.) We position it inside the frame and specify probabilities for the size and orientation. The process is repeated until the chosen number of figures is reached. The resulting picture is now scaled down to about one-eighth size and drawn by the plotter. The next cluster is generated, and so on, until the required number of clusters are ready. I found that a considerable variation in size within the clusters, and between clusters, is necessary to obtain interesting results. It was decided, by the way, that the clusters, as well as the individual figures within them, could overlap each other.
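The whole cluster procedure can be sketched as below. The numeric limits (cluster spread, size range, five clusters) are illustrative assumptions, not those of the original; what the sketch preserves is the structure: choose a count, then for each figure choose type, position, size and orientation at random, and repeat for each cluster, with overlap allowed throughout.

```python
import random

random.seed(1967)

FIGURES = ["beaver", "maple leaf", "centennial symbol", "Expo symbol"]

def make_cluster(centre_x, centre_y):
    """One cluster: a randomly chosen number of randomly chosen figures,
    each given a chance position near the centre, a size and an orientation."""
    n = random.randint(4, 8)                     # uniform choice of count
    figures = []
    for _ in range(n):
        figures.append({
            "figure": random.choice(FIGURES),    # equal probability for each
            "x": centre_x + random.gauss(0, 0.5),  # scatter about the centre
            "y": centre_y + random.gauss(0, 0.5),
            "size": random.uniform(0.5, 2.0),
            "angle": random.uniform(0, 360),
        })
    return figures

# Several clusters, themselves placed at random in the frame; both the
# clusters and the figures within them are free to overlap.
clusters = [make_cluster(random.uniform(1, 9), random.uniform(1, 9))
            for _ in range(5)]
print(len(clusters), [len(c) for c in clusters])
```

In the original, each cluster was drawn at about one-eighth size by the plotter; here the output is simply a list of figure descriptions a plotting stage could consume.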
Instead of making a composition from many elements, we can take one figure, such as a girl's face, and apply various transformations to it (fig 6).
A number of interesting mathematical transformations are possible, but here I was concerned with those involving randomness. The picture is represented within the computer as the co-ordinates of 660 points, with straight lines joining them drawn by the plotter. She can be shaken up, each point moving in the horizontal direction by at most 0.3 of an inch, with a normal distribution (average 0, standard deviation 0.3) (fig 7).
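The shaking transformation is simple to state in code. A minimal sketch, with a synthetic point list standing in for the 660 co-ordinates of the face: each point's horizontal displacement is drawn from a normal distribution and clipped so that no point moves more than 0.3 inch.

```python
import random

random.seed(6)

MAX_SHIFT = 0.3  # inches, as in fig 7

def shake(points):
    """Shift each point horizontally by a normal amount (average 0,
    standard deviation 0.3), clipped to at most 0.3 inch either way."""
    shaken = []
    for x, y in points:
        dx = random.gauss(0, MAX_SHIFT)
        dx = max(-MAX_SHIFT, min(MAX_SHIFT, dx))  # enforce the 0.3-inch limit
        shaken.append((x + dx, y))
    return shaken

# A stand-in for the 660 points of the face drawing.
points = [(i * 0.1, (i % 7) * 0.1) for i in range(660)]
print(shake(points)[:3])
```

Since the lines joining the points move with them, a small jitter of every point is enough to make the whole drawing tremble.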
The picture can be of anything, including letters: the same program was used to shake the tower of Babel (fig 8). These programs were written in SPARTA, a programming language I designed for manipulating arbitrary line drawings such as these. The commands, easily learned by a layman, include words such as MOVE, SIZE, ROTATE, FRAME and RDSTP (random distort points).
We can also deal with lines rather than points, and break them into (controlled) random sections. Or we can allow each of the line segments to shift, change size and be rotated. In the case of fig 9, a shift of lines along the horizontal direction produced a distortion effect not uncommon in some modern paintings.
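Two of these segment operations can be sketched as follows. This is plain Python, not SPARTA, and the spread of the shifts is an assumed value: each segment is moved as a whole along the horizontal axis by a normal offset (the fig 9 effect), and a second helper rotates a segment about its own midpoint.

```python
import math
import random

random.seed(9)

def shift_segments(segments, sd=0.2):
    """Shift each line segment as a whole along the horizontal axis,
    the offset drawn from a normal distribution (average 0, spread sd)."""
    out = []
    for (x1, y1), (x2, y2) in segments:
        dx = random.gauss(0, sd)
        out.append(((x1 + dx, y1), (x2 + dx, y2)))
    return out

def rotate_segment(seg, angle_deg):
    """Rotate one segment about its midpoint by the given angle."""
    (x1, y1), (x2, y2) = seg
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    a = math.radians(angle_deg)
    def rot(x, y):
        return (cx + (x - cx) * math.cos(a) - (y - cy) * math.sin(a),
                cy + (x - cx) * math.sin(a) + (y - cy) * math.cos(a))
    return (rot(x1, y1), rot(x2, y2))

segments = [((0, i), (1, i)) for i in range(5)]  # a small stack of lines
print(shift_segments(segments)[:2])
print(rotate_segment(((0, 0), (1, 0)), 90))
```

Breaking lines into random sections or changing segment sizes follows the same pattern: apply a controlled random amount of one operation to each segment independently.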
Since the degree of transformation and distortion can be controlled, we can make a sequence of figures which are continuously transformed from one to the next, and thus produce a film. Running it backwards and starting from the unrecognizable jumbled figure, order and recognizability increase as the girl emerges from chaos.
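One way such a film sequence might be generated, sketched under the assumption of a linear schedule: the amount of horizontal jitter grows from zero to a maximum across the frames, so that played forward the figure dissolves and played backward it emerges from chaos.

```python
import random

def jitter(points, sd, rng):
    """Displace each point horizontally by a normal amount of spread sd."""
    return [(x + rng.gauss(0, sd), y) for x, y in points]

def film(points, frames=24, max_sd=0.3):
    """One frame per step, the degree of distortion growing linearly
    from none at all to the full amount."""
    seq = []
    for f in range(frames):
        rng = random.Random(f)           # deterministic jitter per frame
        sd = max_sd * f / (frames - 1)   # 0 for the first frame, max for last
        seq.append(jitter(points, sd, rng))
    return seq

points = [(i * 0.1, 0.0) for i in range(20)]  # stand-in for a line drawing
frames = film(points)
print(len(frames), frames[0][:2])
```

Because the degree of distortion, not just its presence, is under control, intermediate frames interpolate smoothly between order and disorder.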
A set of programs by Martine Puzin produced results reminiscent of biological cells, cracked earth, ice floes, spiders' webs, Klee's paintings, micro-structure of crystals, honeycombs, soap bubbles and so on (fig 10), pointing to parallels between the random patterns and distributions found in nature and in art.
The attempt to develop generational rules for patterns not only adds to the usefulness of the computer for graphic design but may also lead to a greater insight into the very nature of patterns and structures. Controlled randomness is offered here as a useful concept in this direction.