Mind and brains

Until now, I have avoided, but not excluded, referring to the brain. The straightforward question of the brain's relation to the mind was partially answered when I said that minds determine the nature and scope of biological processes. But this is too general and probably insufficient. While this research took place and working hypotheses (mainly based on a semiotic premise) were advanced, my attention was caught by data from brain research (such as that made possible through MANSCAN, the mental activity network scanner at the EEG Systems Laboratory), and especially by recent results concerning biological brain processes pertinent to learning.

1. Synaptic development and learning

Shortly after World War II, Donald Hebb (in his Organization of Behavior, 1949) suggested that the repeated use of particular neurons - let me recall that a normal brain consists of about one hundred billion neurons connected with other brain cells through axons - causes a long-term change in their structure and facilitates the future passage of information between them (as one can learn from medical literature or any decent psychology book). The use Hebb referred to is learning. He also wondered how neural structures sustain thought. (This has to be understood against the background of the opinion that neural structures form the implementation medium of human intelligence.) Later on, William Greenough, working with newborn animals that he subjected to complex environments, noted dendritic and synaptic developments. He also found that depriving a developing brain of stimulation affects the number of synapses and even their size. However, only very recently, a group at the University of Toronto (Ted L. Petit reported the results in 1987) was able to show that learning exerts a direct effect on the configuration of neurons and influences maturation.
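Hebb's suggestion is often rendered computationally as a weight update proportional to the product of pre- and postsynaptic activity. The sketch below is only an illustration of that rule in its simplest form; the function name, learning rate, and activity values are mine, not drawn from Hebb or from the experiments cited above.

```python
# Minimal sketch of a Hebbian update: repeated co-activation of two
# units strengthens the "synapse" (the weight) connecting them.
def hebbian_update(w, pre, post, rate=0.1):
    """One learning episode: dw = rate * pre * post."""
    return w + rate * pre * post

w = 0.0
for _ in range(10):                 # ten episodes of joint activity
    w = hebbian_update(w, pre=1.0, post=1.0)

print(round(w, 2))                  # the connection has grown from 0.0 to 1.0
```

Note that when either unit is silent (pre or post equals zero), the weight is left untouched: only joint activity strengthens the connection, which is the essence of Hebb's proposal.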

Two observations should be made before continuing with the biological evidence supporting my ideas about the mind:

1. It is not from the brain matter to the mind that changes in the mind's state are triggered, but from the mind that biological changes of the brain are induced.

2. These changes result in new biological configurations; that is, the relational network of minds, through which each individual mind comes to expression, determines the configuration of the brain.

In what follows, I shall make good the promise to address the issue of variable configurations, as it pertains to the mind and the brain, and also the subject of the anticipatory nature of the mind. Within this framework, a dynamic model of the mind will be presented. Practical consequences of this original model will be discussed when current research in artificial intelligence is scrutinized.

From each neuron base within the brain arise dendrites (in the form of gnarled branches) supporting many thorny protrusions known as dendritic spines. Cells are connected to other cells; axons emit chemical signals received by the dendritic spines. The points of contact - synapses - make this connection possible. A stimulated neuron absorbs calcium through small pores into the axon terminal. The influx triggers synaptic vesicles to release neurotransmitters into the gap between the presynaptic and postsynaptic disks. These activate the dendritic side, and information is transmitted from one cell to the next. It has been shown that periods of stimulation of hippocampal neurons are accompanied by a sudden proliferation of synapses and an increase in their diameter and curvature. In short, and in Petit's words, "Synaptic development is a direct response to the experience of learning". Petit's statement implies the distinction between the level of description (neuronal information processing, for those who consider mind processes as a class of information processing transformations) and the level of abstraction that the human being reaches in practical life. Learning is one of the interactions through which minds are constituted.

The role of mind in producing changes in the brain is acknowledged in data furnished through experiments, such as those reported by Petit et al. It is likely that other forms of interaction (evaluation and planning) can also be simulated through appropriate stimulation of neurons. The direct consequence of such stimulation is the increased ability of neurons to receive and relay information. (This refers to the level of description.) To increase the number of synapses on a neuron is actually to increase the number of channels of interaction with other neurons.

Interaction of the mind with other minds at the level of abstraction required by complex practical endeavors requires the multiplication of the channels of interaction among the neurons. In addition, larger synapses and the increased curvature of the synaptic disk could trigger even more neurotransmitters and generate new receptors, or expose receptors hiding in the postsynaptic membrane. The greater thickness of dendritic spines augments each spine's carrying capacity, improving the odds that a neural message crossing the synapse will reach the nucleus of the cell. Mind activity thus consists not only of recruiting all the brain's resources, but even of multiplying them.

The mind does not result from biochemical reactions, but from the level of abstraction required by the exigencies of our practical existence. Our mind potential and the plasticity of neurons (the ability to sprout new synapses when activated above routine levels) are obviously interrelated. Learning (a continuous human experience), evaluation, and planning are no longer abstract experiences, but determinant factors of mind development.

2. Self-organization

The brain itself is "a-logical and a-rational" (cf. Margolis). While a brain theory is, after all, quite a difficult physical (or biophysical) theory, a theory of the mind is much more difficult to formulate, exactly because of the level of activity at which the mind can be defined. We know that our language ascertains the entity of mind; that our use of the word, in various contexts, hardly suggests the identity "brain = mind"; that our existence, while determined in many ways by our biological status, extends beyond it; that striving to understand our own functioning, we are able to reflect upon ourselves, almost detaching our thoughts about thinking from the thinking itself. In short, it seems that we implicitly acknowledge, together with the meta-realm, the higher order of the activity of the mind (in respect to that of the brain).

Definitely, there is some circularity in my argument, which results from the fact that we ascribe to the mind a category of intellectual artifacts characteristic of the meta-level, among which at least one (the mind itself) actually represents our object of interest and inquiry. To ignore this circularity would mean to condemn the enterprise, adding one more example to the long list of homunculus-driven theories. (If there is something we don't understand or can't identify, the homunculus does it!) To acknowledge it without attempting to affect the circularity is gnoseologically inefficient. So the task - and many these days have assumed it - is to address its very ground and to see whether the circular nature of the argument does not by itself constitute a source of better understanding of the subject. The brain does not "know" the world in which we live, or our internal world. The mind does. And the source of this "labeling" activity is not the homunculus, but other minds, or more generally speaking, our existence in the network of relations that constitute our life experience (in which and through which we project our biological identity).

Evidently, what D'Arcy Wentworth Thompson called "the function of growth" is probably the pre-eminent characteristic of the brain, at least in the initial phases of its ontogeny. As some researchers have shown, in the nine months preceding birth, the arrangement of the components of the brain is determined ("dendrites sprout from the surfaces of neurons, branch out in all directions, and overlap other dendrites", cf. Levine). Afterwards, functional groups are formed by having the strength of synapses modified. Moreover, a drastic editing - we reach childhood with little more than half the number of synapses we had eight months after birth - is also acknowledged.

The increase of complexity in the system of the brain is symptomatic of what is termed "self-organization", explained in the language of dynamic systems - what H. Haken called "synergetics", Ilya Prigogine, "dissipative structures", and Maturana and Varela, "autopoiesis". The increased organization is the result of such a system's dissipating entropy to the surroundings. Such systems are the object of research in Artificial Life studies, an extension of the mathematics of dynamic systems.

Information is a weak description of organization - messages of high information content are rather disorganized. The message, affecting the intensities and directions of synapses, gets "engraved", "hardwired", in that profoundly redundant system that our brain constitutes. The value of the messages exchanged by our minds within the framework of our interactions in work, learning, exercising, etc. is represented by what it would take (if we could describe the change in organization in some measurable entities) to derive a minimal mind context (of only two different human experiences); or, if we could emulate it in some computational or neural processing device, by the kind of computation or neural processing required to arrive at the same configuration.

The enormous number of variables necessary to describe the biophysical system of the brain does not remain constant, since, as we have seen, over the lifespan of an individual, changes (in the number, intensity, and direction of synapses) take place. In the language of dynamic systems theory, the variables describing the system can be represented as coordinates in an abstract state space, usually called "phase space". Hence we are contemplating a phase space with a changing number of variables necessary to describe the brain as a system. I would argue that there must be a metaphase space (a higher level of phase space) with a finite number of variables defining the state of the space of brain variables (i.e., the change through which the brain goes).

Mathematical descriptions of the brain system and mind descriptions are by their nature simplifying abstractions. The burden of interpretation of any mathematical description results from the tension between the abstracted and the abstraction. The brain, as a self-organizing system, dissipates its entropy into the surroundings. Its state changes continuously. In some ways, the brain is a set of brains, each with its own states. Within the brain, some modes are likely to dominate others, and stability can be reached in states of broken symmetry (such as the dominance of the verbal or the visual). What we know from biological research is that learning takes place (via the mechanism of the mind) and that if damage is induced to the location associated with a trained function, that function is lost. If damage occurs before training, learning is still possible; that is, the system reallocates resources. This clinical observation is important for a better understanding of what I call the "taming of the brain", the transition from chaos to order.

Self-organization as a mathematical notion (itself a product of interacting minds) zeroes in on collective behavior. Changes in the "coordinates" of the phase space result in qualitatively different behaviors. All this sounds attractive because, indeed, we notice such changes in the phase space of brains, and we notice the nonlinearity of the process. The danger is that we take the product of minds and use it as input to other minds without knowing whether what happens here is processing according to the premises of the mathematical model or according to the subject we apply it to. This is why I suggest that we consider the metaphase space that defines the state of the phase space. All this sounds more complicated than it is, but it actually boils down to ascertaining that if brain states change, this change is under the control of the mind. Moreover, instead of dealing with fixed configurations, we deal with variable configurations that result from the need to operate under rules of optimization.


3. Experience

The record of the mind's control over the brain's phase space is our experience. Over time it displays properties characteristic of chaos. Some experiences are quite linear; but the majority evolve toward attractors - fixed points, limit cycles, and, especially, "strange attractors". Fixed point attractors correspond to experiences in which the initial state (start-up condition) leads to what is called a "basin of attraction" (an area of possibilities that eventually brings the system to a definite resolution). When we search for a word for a crossword puzzle, the set in which we search is the basin of attraction; the word found is the fixed point.

The initial state - definition and background information - leads to basins of attraction that differ from person to person. The limit cycle attractor corresponds to motion in a closed loop in phase space. The classic example is the mathematical modeling of the human heartbeat (Balthasar van der Pol, in the 1920s). It seems that metabolic chemical reactions in living organisms explain the "inner clock". The mind can change the initial conditions, and thus the inner clock adapts to new circumstances. More complex is the quasi-periodic attractor, describing a return to almost, but not quite, the same state. Memory is a good example here (although some place it in the category of the limit cycle). Strange attractors, corresponding to paths that diverge quite rapidly, describe a dynamics very sensitive to the choice of initial data. I would venture to exemplify this by what is called "inspiration" (in research, artistic work, etc.) or by intuition. A slight variation of the initial condition can entail an unpredictable result: Eureka! There are many kinds of such attractors, resulting in various forms of chaos (between quasi-periodic and non-periodic).
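The sensitivity to initial conditions that defines strange attractors can be made concrete with the logistic map, a standard toy model of chaotic dynamics. The map, its parameter r = 4, and the starting values are textbook illustrations of my choosing, not examples drawn from the discussion above.

```python
# Two trajectories of the chaotic logistic map x -> r*x*(1 - x),
# started a hair's breadth apart: the tiny initial difference is
# amplified until the trajectories become uncorrelated.
def logistic_trajectory(x0, r=4.0, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)            # differs by only 1e-6

gaps = [abs(x - y) for x, y in zip(a, b)]
print(max(gaps))                             # many orders of magnitude above 1e-6
```

A variation of one part in a million in the "initial data" grows, within a few dozen iterations, into a difference of the same order as the states themselves - the formal counterpart of the "Eureka!" unpredictability invoked above.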

Chaos seems to have a structure represented by the geometry of strange attractors. The metaphase space evidently reports on the chaos of chaos, and as such can be seen as an ordering process able to zero in on specific space-time sequences.

We have, to date, acknowledged (and improved upon) some good, or at least acceptable, description mechanisms, but we have not yet approached the actual process; that is, we have not delivered an explanation. Some have, such as Gerald Edelman, who introduced a neural Darwinist model (neuronal group selection). What it actually says - and others have said it before in a different language - is that competition through selective inhibition (from one cell to others, from one group to others) takes place, and only the best equipped (which turns out to be the best trained) survives. Specialization of certain areas and positioning (a topological extension into the geometry of the brain) are of extreme importance. In addition, an intrinsic mechanism of self-similarity is evidenced in the network. While no two brains have the same network of synapses, it is quite clear that what distinguishes the corresponding minds are not numbers of neurons or connections, but their interaction with other minds as this takes place in the process of self-constitution of the human being in praxis.
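The selectionist idea - competition among neuronal groups, with the best trained surviving - can be caricatured in a few lines of code. What follows is a toy winner-take-all sketch, in no way Edelman's actual model; the gain and inhibition rates are arbitrary illustrative values.

```python
# Toy "competition through selective inhibition": on each trial the
# currently strongest group is reinforced and its rivals are weakened.
# All numerical values are illustrative, not from Edelman's model.
def compete(strengths, trials=200, gain=0.05, inhibition=0.02):
    s = list(strengths)
    for _ in range(trials):
        winner = max(range(len(s)), key=lambda i: s[i])
        for i in range(len(s)):
            if i == winner:
                s[i] *= 1 + gain          # the best trained gets stronger
            else:
                s[i] *= 1 - inhibition    # the others are inhibited
    return s

final = compete([1.00, 1.01, 0.99])
print(final.index(max(final)))            # -> 1: the initially strongest group won
```

Even a minute initial advantage (1.01 versus 1.00) is locked in and amplified by the selection loop - a crude image of why "the best equipped turns out to be the best trained".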

To the role of experience - selective strengthening or weakening of neural connections - Edelman contributes the hypothesis that cell adhesion molecules (CAMs) are involved (experience alters their distribution and therefore the map of synapses) and that the mechanism controlling the process runs according to Darwinian principles. This is a strong metaphor, and even the speculative aspect should not be easily discarded. But even if confirmed, it only gives a description at the object-level, failing to address what experience is (definitely more than a collection of stimuli), and even more important, how emotion, beliefs, motives, etc. influence the process.


4. How do minds anticipate?

Each time explanations, such as the ones mentioned above, are given, there is the chance of finding the homunculus behind the explanation. Indeed, the mind, as I defined it, seems to be the homunculus; it "knows" - or seems to know - how to operate on the phase space of the brain and optimize its activity. It "is" in anticipation of events and controls the intellectual identity of the person. It is time to prove the assertion.

It was initially established that neurological control of body movement originates in the cerebral cortex. In the 1930s, Wilder Penfield triggered actions by electrical stimulation through electrodes attached to subjects' brains. Conversely, Hans Kornhuber and Lüder Deecke (1965) showed that changes in the voltage of brain waves precede movements. The so-called "readiness potential" is the neurological activity measured in the interval before the action. This interval is about 800 milliseconds, much longer than the time needed to transmit a command from the motor cortex to the muscle.

This first anticipatory step was interpreted in several ways ("Has the brain a mind of its own?") until new measurements of the time at which the subject actually becomes aware of the intention to act showed that this happens 450 milliseconds before the act. The final 150 milliseconds remain an interval of reconsideration. This final research is the work of Benjamin Libet (who also worked with John C. Eccles, one of the main proponents of the independent reality of the mind). While some were quick to celebrate a proof of Freud's notion of the unconscious domain (where human will, according to his theory, is rooted), as far as my model of the mind is concerned, this provides evidence of the anticipatory nature of mind activity.

The rather ample interval between the initiation of movement and the movement seems to suggest that the mind is in anticipation of events and performs an updating function that integrates patterns from previous experiences. From the metaphase space to the phase space of the brain, various possible events are triggered, from among which, according to circumstances, only a few are actually realized (some even stopped shortly before being carried out). So much for data.

Concerning the processes of our self-constitution, it can be stated that minds represent the medium of our continuous self-constitution. As agents of our interaction with other minds, and with the world, they make us part of all these interactions. Let us recall that Peirce, in his semiotics, expressed this idea when he defined the human being as part of the sign it interprets.

Each of our instantiations takes place in a domain of infinite possibilities characterized in terms of the possible relations through which minds are constituted and identified. From the metaphase space of the mind, mappings of the phase space of the brain are continuously submitted for instantiation as new configurations. The "hardware" is actually variable and used in an optimal way. All these successive configurations are in anticipation of events and occurrences, respecting patterns of sameness, of self-similarity (which account for the notion of personality) and of scaling (which accounts for the notion of human types).

But we should not lose sight of our own premises. Indeed, if minds exist only in relation to other minds, it is not false to assume that this relation is of the order of one (the entropy-dissipating brain) to many (minds) and thus of self-configurational potential. Growth results from differentiation - a process from which the biological endowment benefits over generations. Brains do not label the world; they receive it either labeled (and categorized) via the optimization process from the metaphase space or in noncoherent frames, when minds have not yet organized it according to those practical rules that are established within human interaction. The knowledge of human interaction is necessary to the extent that, to survive and prosper, we interact. The selection process is indeed relevant at this level, but not at the level of neuronal groups. In this process, there is always something ahead of us (time, places, events, other minds), thus an intrinsic anticipatory striving.


Mind and body

This was the first part of the biological side of the story of the mind. The second is even more intriguing. In 1885, Paul Ehrlich (and later his student Edwin Goldmann) noticed the so-called "partition" between the blood and the brain. (There are other partitions, too.) The brain, although connected to the circulatory system, is shielded by a selective barrier. This barrier is a highly dynamic structure that allows the brain to exchange substances with the body's internal environment without jeopardizing its own biological integrity. The brain, controlled by the mind, has a quasi-independent status with respect to the rest of the body. Under extreme circumstances (such as those allowing the AIDS virus to cross the barrier, where it attacks neurons and their structural support - glial cells - causing memory loss, palsy, dementia, and finally paralysis), the brain's independence of the body ceases, and with it the relative independence of mind from body. As unfortunate as this extreme case is (the barrier allows the virus to pass from the bloodstream to the brain, but bars entry to the only drugs showing promise of inhibiting the virus elsewhere in the body), it makes us more aware of the fact that minds are not biological in nature, but semiotic, cultural. They seem to maintain an existence outside our body, while coextensive with our physical reality. This observation explains, for instance, the mysticism surrounding the mind. (Needless to say, Western civilization knows no mysticism surrounding the body.)

The relative independence of the mind from the body refers to the fact that some characteristics of our minds' interaction can be associated with bodily functions. For instance, the adrenal glands (which look like blobs of tissue seated atop the kidneys) play an important role in our alertness. The speed at which minds interact and the depth of their interaction can be increased by the adrenaline pumped into the bloodstream, or reduced by endorphin secretions during infections. But I claim that this happens under the control of the mind and as an expression of the needs perceived in interaction, needs for whose fulfillment the mind enlists all the body's potential. In addition, our bodies receive most of the information defining the physical context. When isolated from the world (i.e., deprived of light, sound, and other sensory data), the body gradually ceases to support the interaction of the mind with other minds. This is of extreme importance for maintaining the integrity of the system and for avoiding self-destruction.

That the entire mind-brain/mind-body discussion continues today in the dialects of information theory and computer science (as well as in the language of genetics) testifies to our continued acceptance of the epistemology of representation (which has expanded into genetics as well). But it is exactly at this juncture, now that we have engineered the most efficient machines for manipulating representations, that we can free ourselves from the domination of this epistemological model. This will not happen by submitting only nonrepresentational explanations of the mind, of the genes, or of the world. We should integrate representational, nonrepresentational, and communicational aspects, and make these more comprehensive explanations part of our new practical experience.


Self-similarity of minds

We have already alluded to sameness (and defined self-similarity as the expression of sameness). The self-similarity of our minds, on which the category of personality finally rests, suggests new ways to model minds and even to continue research with the help of mathematical tools. I know that in describing the mind as a process of anticipatory configurations, I implicitly make reference to topology (within which configurations, as forms, play an important role), although minds are characterized by irregularity and fragmentation, not by topological regularity and connectedness.

Like coastlines, minds are "broken", "tortuous", "tangled" ("Oh what a tangled web we weave, when first we practice to deceive", said the poet), "hydralike", "ramified", to use qualifiers provided by Benoit Mandelbrot, who first introduced us to the geometry of those forms. To a great extent, they also recall Brownian motion (of particles suspended in fluid). The direction of the straight line joining the positions occupied by a particle at two instants very close in time is found to vary absolutely irregularly as the time between the two instants decreases. In both cases - coastlines and trajectories of Brownian motion - we deal with what is called in algebra (from the Arabic jabara, meaning "to bind together") a "function without derivative", which means a curve to which no tangent can be drawn. Coastlines and Brownian motion trajectories are described by what Mandelbrot calls "fractals" (which refer to how things are broken, or irregular, as opposed to algebra, i.e., the study of how things are continuously bound together). It seems to me - and I shall shortly outline the reasons for my argument - that we can speak of a fractal geometry of the mind, in much the way Mandelbrot, in his deservedly well-praised essay, The Fractal Geometry of Nature, did, referring to forms and phenomena in nature. (As a matter of parenthesis, I asked Mandelbrot for permission to paraphrase his title. He courteously suggested that I'd be better off coining my own metaphor. He is obviously right.)

Minds have an infinitely granular structure, displaying a more general notion of continuity than the one we are familiar with from Euclidean space. We, as observers of minds, intervene in their reality. Minds consist of endlessly embedded configurations, displaying a special type of inner infinity. Minds have more than one dimension, the determination of which is a matter of degree of resolution. Mandelbrot uses an example that applies to our subject as well. A ball four inches in diameter, made of twine 1/32 of an inch in diameter, displays several distinct effective dimensions. To a faraway observer, the ball appears as a point, i.e., zero-dimensional. As it is approached, it is perceived as having three dimensions. As we get even closer, we see a mass of one-dimensional threads. If we manage to come as close as 1/10 of the diameter of the thread, each thread looks like a column, and the image regains its three-dimensional status. At closer range, each column dissolves into fibers, and the ball again becomes one-dimensional. The crossing of dimension from one value to another corresponds to the closeness of our observation. Under an electron microscope, we see points again, and thus the ball becomes zero-dimensional. So our evaluation depends on the relation of the object to the observer.

In the case of minds, this is even more evidently so. According to our interaction with minds, we notice a succession of different effective dimensions. However, there is also order to minds, which can be expressed by their relative invariance under change of scale and probably under displacement. Self-similarity corresponds to invariance under ordinary geometric similarity; possibilistic similarity (not Mandelbrot's statistical sense) covers invariance under displacement. Minds are probably better described by self-mapping and nonscaling fractals.

The property of self-similarity (or scaling) is connected to the notion of mind dimension. This is the only instance in which some elementary mathematical formulation will be suggested in the text. (Other mathematical descriptions are omitted.)

A one-dimensional object (such as a line) can be divided into N identical parts, each of which is scaled down by the ratio r = 1/N from the whole, that is,

                                   N r^1 = 1                                (a)

A two-dimensional object (e.g., a square in the plane), comprised of N similar parts, can be scaled down by a factor

                                   r = 1/N^(1/2)                            (b)

that is,

                                   N r^2 = 1                                (c)

A cube can be divided into N little cubes, scaled down by a ratio

                                   r = 1/N^(1/3)                            (d)

that is,

                                   N r^3 = 1                                (e)

By generalization, for an object of N parts, each scaled down by a ratio r from the whole, N r^D = 1, so that the fractal dimension D is

                                   D = log N / log(1/r)                     (f)

which means that a D-dimensional self-similar object can be divided into N smaller copies of itself, each of which is scaled down by a factor

                                   r = 1/N^(1/D)                            (g)

The fractal dimension does not have to be an integer. (In Euclidean space, it can only be an integer.) The property, stated here, that minds can look "possibilistically" similar, while at the same time different in detail at different scales, is important for understanding the dynamics of mind configurations. Under non-uniform scaling, which is probably a closer description of minds as processes, we have, instead of self-similarity, self-affinity, which captures the similarity of the process, not of the configurations.
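The relation between N, r, and D can be checked numerically. The short sketch below is my illustration (the Koch curve, a standard textbook case of non-integer dimension, is not discussed above): it recovers the integer dimensions of line, square, and cube, and a fractional dimension for the Koch curve.

```python
import math

def fractal_dimension(N, r):
    """Self-similarity dimension from N * r**D = 1, i.e. D = log N / log(1/r)."""
    return math.log(N) / math.log(1 / r)

print(fractal_dimension(N=4, r=1/4))   # line: 4 pieces at ratio 1/4 (D = 1)
print(fractal_dimension(N=9, r=1/3))   # square: 9 pieces at ratio 1/3 (D = 2)
print(fractal_dimension(N=8, r=1/2))   # cube: 8 pieces at ratio 1/2 (D = 3)
print(fractal_dimension(N=4, r=1/3))   # Koch curve: log 4 / log 3, about 1.26
```

The last line is the interesting one: an object made of 4 copies of itself, each scaled down by 1/3, has a dimension strictly between that of a line and that of a plane.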

What, if anything, do we gain from pursuing the path of fractal geometry?

First, we free ourselves from the obsession of finding some final element that explains the mind. Minds are not composites of elementary minds, but infinite processes displaying properties of non-regularity and fragmentation.

Second, we understand the subjective nature of any attempt to capture and measure minds.

Third, we understand that it is exactly by projecting fractal nets, i.e., with the characteristics given above, that our minds work in anticipation of events.

Probably, the practical inference of this understanding is that our attempts to compare and measure minds have been fundamentally wrong, since they sought regularity and connectivity, as our IQ tests do, and as computer science and artificial intelligence insist on discovering (for at least partial domains). If we indeed want to keep measuring what is immeasurable, I suggest that a fractal dimension would serve more aptly than other measures. The fractal dimension can be applied as a computable parameter in the design of "intelligent" machines.
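If a fractal dimension is to serve as a computable parameter, box counting is the usual estimator: cover the object with grid boxes of size eps, count the occupied boxes at two scales, and take the slope of log(count) against log(1/eps). The sketch below is a minimal two-scale version of this idea; the sampled segment and the scale values are my illustrative choices, and for a straight segment the estimate should come out near 1.

```python
import math

def box_count(points, eps):
    """Number of eps-sized grid boxes touched by a set of 2-D points."""
    return len({(int(x / eps), int(y / eps)) for x, y in points})

def box_dimension(points, eps_fine=0.001, eps_coarse=0.01):
    """Crude two-scale estimate of the box-counting dimension."""
    n_fine = box_count(points, eps_fine)
    n_coarse = box_count(points, eps_coarse)
    return math.log(n_fine / n_coarse) / math.log(eps_coarse / eps_fine)

# A densely sampled straight segment: its dimension should be close to 1.
segment = [(i / 100000, i / 100000) for i in range(100000)]
print(box_dimension(segment))
```

Applied to a genuinely irregular point set (a coastline trace, a Brownian path), the same estimator yields a non-integer value, which is what would make it usable as a design parameter.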


A direction for action

Throughout this text, practical implications of the model of mind I suggest have been alluded to, and in the sections to follow, I shall concentrate on some of them. Since the model is based on the understanding (intelligence) of the dynamics of human practical life, it is imperative that such practical aspects be considered. Many can be approached by the use of a methodology similar to the one I follow, or by generating new methodologies faithful to the spirit of the model submitted here. I shall start with the computer, the machine that so many would like to make a substitute for the mind, or at least use to explain it. As already stated, chances are good for explaining intelligence, in the precise sense of understanding introduced here, and for emulating intelligent decisions. However, in respect to the mind, explanations seem less promising at this moment.


Minds, computers, and memory

Digital technology is so pervasive that, in a rather short time, we shall be able to speak about the hybrid mind - computer and human being. The nature of this technology requires that we not only create circumstances for its best and most efficient use, but also that we do not detrimentally affect the context of the creative unfolding of our minds. Norbert Wiener aptly qualified machines that perform intellectual functions as representative of the second revolution (the first, of the "dark, satanic mills", and the second, "bound to devalue the human brain"), and was quite firm in his warning concerning the implications of the use of these new machines that can perform mental functions. As the father of cybernetics, Wiener correctly pointed out that once we build and use tools (mechanical contrivances) that take over the functions of our arms, or even our minds, we put ourselves in a subservient position. A relation of dependence is established, and whatever we gave up is reflected in a new status of our general sense of responsibility. Exploitation of machines is not neutral.


1. Computers and representation

Based on the representational model, our computers consist of a CPU, internal and external memory, and input and output devices (which can be processors). The processing of discrete representations, of symbols in particular, takes place under the control of programs written in formal languages.

Figure 1.

As long as the task is repetitive, such a configuration is quite efficient. If the problem submitted is computable, we can expect the result within a reasonable time. Tremendous progress has been made in using computers for problems requiring massive number crunching, as well as for those implying symbolic computation. However, once the complexity of the task increases, as is the case in analyzing and generating images and sounds, in natural language recognition, and in building so-called expert systems, we notice that we need not only better algorithms, in order to achieve acceptable efficiency and quality, but also more powerful CPU units, specialized hardware (machine architecture adapted to the kind of computation), extensive memory, and very good input/output management. Of all these elements, I choose to concentrate on computer memory.

Today, memories go into the range of hundreds and thousands of gigabytes, and with the increase in memory, problems of memory management, retrieval time, reliability, and complexity arise. It seems that the more memory computer centers have at their disposal, the more those who work in them require. Instead of rethinking their programs in view of the restricting circumstances, they prefer the easy way out (no matter how expensive). This is well known among computer scientists. It is said that "There is more than one way to solve a computer problem. The first solution is always to buy more hardware, which is almost always the wrong solution" (cf. W.F. Perry).

Let me start with an example: in order to store images generated for visualization, the memory required greatly exceeds the space taken by prints of the same images. Faced with this problem, computer scientists realized that they can store the generating programs and, when an image is required, run the program to regenerate it. The difficulties encountered point out shortcomings in the strategy applied. A moderately talented artist does much better using less memory and faces fewer problems keeping track of the work. Not even the entire installed computing power in the world would suffice to imitate the entire production of a painter like Picasso - let alone to actually generate original work, the creation of which represents an entirely new human experience. (Some would use laser disk technology or digital formats to store this experience.)

The reason is that we continue to think of computers in representational terms - not as a medium for interaction with the human mind or as a constitutive tool, but as a facilitator of imitations; not as an interactive communication technology, but primarily as a functional performer. Computers - PCs or Crays, sequential or parallel - come in fixed configurations (adaptable within limits), requiring software for solving precise problems to drive them, and people trained to "understand" their "language".

The alternative I suggest is a computer with variable, even soft, configurations (sequential as well as parallel), which can be programmed to use as much computing power as the problem requires, and as large a memory as necessary to solve the problem. Variable configurations are possible insofar as we can open or close circuits in software instead of through hard-wired functions. The entire Boolean logic architecture can be modeled using fuzzy logic instead of crisp, clear-cut logic. Once started, processes could be designed to be their own short-term memory, i.e., to carry information pertinent to the process in much the way we send radio and TV signals: loaded with information without which their reception is not possible. It would be even better to process, much the way our minds do, by re-creating information as the process needs it, not as it is predefined. Storing, checking for the integrity of the information, and matching are useful only when they do not become a goal in themselves.
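How a Boolean architecture might be softened can be hinted at with the standard operators of fuzzy logic (due to Zadeh), where truth comes in degrees between 0 and 1. This is a minimal sketch of the operators, not a circuit design:

```python
def f_and(a, b):
    """Fuzzy AND (Zadeh): the conjunction is as true as its weakest operand."""
    return min(a, b)

def f_or(a, b):
    """Fuzzy OR: the disjunction is as true as its strongest operand."""
    return max(a, b)

def f_not(a):
    """Fuzzy NOT: the complement of a degree of truth."""
    return 1.0 - a

# With degrees restricted to 0.0 and 1.0, these reduce exactly to the
# Boolean gates; with intermediate degrees, the "circuit" opens and
# closes gradually instead of switching hard.
print(f_and(0.8, f_or(0.3, 0.6)))   # 0.6
print(f_not(f_and(1.0, 0.0)))       # 1.0 (the classical case)
```

Because the classical gates are a special case, a fuzzy model can stand in for a Boolean one wherever hard switching is too coarse.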

Considered in the abstraction of computation - not in the context of the information processed - memory storage and management require considerable resources. In variable configurations, memory storage and management would become an issue of the "intelligence" we endow the system with through programs and rules. Not only would we save money by avoiding excessive memory - probably the costliest part of our current computers - but we would also be able to approach problems for which we do not yet have computable functions. The use of computers approaches our expectations as we learn how to interconnect them. The interconnection would also allow for the re-creation of data, as required by some processes. This is not a matter of communication protocols, networking, or file servers, but of programmed interactions for achieving a "critical mass," and thus reaching the equivalent of the process through which human minds are constituted.

I want to repeat here that open systems are the only way we can use the power of computers, because in open systems, the critical mass of mind interaction can be reached. My opinion is that computers are a medium (among other media) for constituting the critical mass of minds, i.e., for engendering new forms of human practice. What we have learned from the experience of storing images is that we have to program processes which contain procedures to re-create, rather than store, data, to interactively generate alternatives, not so much to exhaust a problem by outputting all the values, whether relevant or not to the experience for which the computer is used. Actually, human beings memorize little. They use many procedures to re-create information. This is why it is said that we do not remember things, but memories. This is also why recollections of the same event differ so much from person to person. To generate our own memories as we need them is a much more efficient method than to store everything. We have to start thinking about memory as a medium pertinent not to the level of description (always requiring more storage), but to the level of abstraction (always keeping less, but what is essential).
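The idea of re-creating rather than storing data can be sketched minimally: keep a small generating procedure and a few parameters, and recompute the data on demand. The function name and the interference pattern below are illustrative, not part of any actual system:

```python
import math

def render(width, height, freq):
    """Regenerate a grayscale pattern from three parameters.
    The 'stored' representation is just (width, height, freq);
    the pixels are recomputed on demand instead of being kept
    in memory."""
    return [[(math.sin(freq * x) * math.cos(freq * y) + 1.0) / 2.0
             for x in range(width)]
            for y in range(height)]

params = (256, 256, 0.1)          # a few values to store ...
image = render(*params)           # ... regenerate 65,536 pixels when needed
print(len(image), len(image[0]))  # prints: 256 256
```

Because the procedure is deterministic, regeneration yields the same data every time; the memory at the level of description (pixels) is traded for memory at the level of abstraction (the generating rule).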


2. The uncomputable

In 1986, Peter Kugel suggested that we "look at parts of human thinking that seem (to some of us) to involve more than computing and try to develop precise uncomputable models of them". For this purpose he applies the mathematical theory of uncomputability (recursion theory, if trial-and-error sounds too simplistic). Indeed, there is a need to take the discussion out of the emotionally and culturally biased context in which we contemplate our substitution by machines and to raise it to the realm of a theoretical controversy in which we compare models of the mind (level of abstraction), not minds per se (level of description). In this context, we should be able to understand the epistemological error we keep making. Mind exists only in the plural; reconfigurations are in anticipation of problems and not in reaction to them. Intelligence is process.
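Kugel's trial-and-error machines can be sketched quite simply: the machine emits a guess after each observation and is allowed to revise it; only the guess it eventually settles on counts as its answer. The sketch below (names and the finite budget are illustrative) treats the semi-decidable question of whether a target ever appears in a stream, a question an ordinary algorithm cannot answer "no" to in finite time:

```python
def trial_and_error_membership(stream, target, budget):
    """A trial-and-error 'computation' in Kugel's sense: after each
    observation the machine records its current guess; the answer is
    the guess it converges to in the limit. Since 'target appears' is
    only semi-decidable, the machine may revise an initial 'no' to a
    final 'yes', but never retracts a 'yes'."""
    guess = False                  # initial guess: target never appears
    guesses = []
    for i, item in enumerate(stream):
        if item == target:
            guess = True           # change of mind; stable from here on
        guesses.append(guess)
        if i + 1 >= budget:        # we can only ever inspect a finite prefix
            break
    return guesses

print(trial_and_error_membership([4, 1, 7, 7, 2], 7, 5))
# [False, False, True, True, True] - the machine changes its mind once
```

The crucial difference from an algorithm is that no output is final while observation continues; the result is defined by convergence, not by halting.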

In computational terms, we should be able to intelligently connect machines, program the optimal use of resources, and maintain a process of reconfiguration that actually makes each computer a potential plurality of computers.

Figure 2.

The program selector, which chooses appropriate programs from the set of all shared available programs, here becomes a critical component. Kugel gives the example, "It might, for example, study the situation and decide that it was time to use ANIMAL RECOGNIZING PROGRAM rather than the BEAUTY APPRECIATION PROGRAM". As is the case with human minds, the selection of a program is a matter of context. We can afford to maintain the current computer, a context-independent machine, only as long as this independence does not affect the practical purpose of the activity in which we involve computers. I submit that a "Black Monday" on the stock market is still an acceptable accident of this context abstraction in comparison to any situation in which, in the absence of context considerations, computers would launch nuclear missiles. The anticipatory nature of human minds results from the context of interaction which defines those minds. Contexts defy the domination and absolutism of representation; they are the space and time of interaction, within which our presence projects previous experiences in view of new experiences.
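A program selector of the kind Kugel describes amounts to dispatch on context rather than on data alone. The registry and the two toy programs below are hypothetical, chosen to echo Kugel's own example:

```python
def animal_recognizer(data):
    return f"recognizing animals in {data!r}"

def beauty_appreciator(data):
    return f"appreciating the beauty of {data!r}"

# Hypothetical registry mapping a described context to a program,
# in the spirit of Kugel's program selector.
PROGRAMS = {
    "field observation": animal_recognizer,
    "gallery visit": beauty_appreciator,
}

def select_and_run(context, data):
    """Choose a program as a function of the context, not of the data
    alone; fall back to a default when the context is unrecognized."""
    program = PROGRAMS.get(context, beauty_appreciator)
    return program(data)

print(select_and_run("field observation", "photo_042"))
```

The same data routed through a different context yields a different computation, which is precisely what a context-independent machine cannot do.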


3. Connectionism

Sequential hierarchical computers execute, at a very fast pace, one operation at a time. They are algorithmic machines. This modus operandi introduces many restrictions, reflected, as we have seen, in unjustifiable memory requirements, processing cycles, error handling, etc. For any problem to be solved on such a computer, it has to be represented by a computable function, and it has to be tractable (solvable with available computing resources in an acceptable time).

Neural networks (actually connectionism, a family of statistical techniques for extracting complex, higher order correlations from data) make some non-algorithmic computations possible. The connectionist paradigm (inspired, as we know, by neural architecture) substitutes parallel distributed processing for central serial (hierarchical) processing under the assumption that it will have access to cognition processes at subsymbolic levels.

The connectionist alternative, embodied in networks of connected units with weighted interconnections, is not hierarchical. It emulates, to the extent such emulation is possible, and within our relative knowledge of the brain, the "biological engine", claiming the ability to explain learning (and use learning techniques to accomplish some tasks). Massive parallelism, distributed information storage, and associative interconnections contribute to the simulation of intelligence. Moreover, the dynamics of such networks exhibits nonlinearity and chaotic behavior, characteristics that many associate with or attribute to human intelligence. The focus on the physical and biochemical reality of the brain is probably both the strength and the weakness of connectionism.
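The learning that connectionism claims to explain can be illustrated at its smallest scale with Hebb's rule, recalled earlier in connection with synaptic development: a connection is strengthened in proportion to the correlated activity of the units it links. A minimal sketch, assuming NumPy, and deliberately omitting the normalization (e.g., Oja's rule) that realistic models add to keep weights bounded:

```python
import numpy as np

def hebbian_step(w, x, eta=0.1):
    """One Hebbian update: each weight grows with the product of its
    input's activity and the unit's output - no teacher, no error signal."""
    y = float(np.dot(w, x))        # the unit's output
    return w + eta * y * x         # "fire together, wire together"

w = np.full(4, 0.05)                      # small initial weights
pattern = np.array([1.0, 0.0, 1.0, 0.0])  # a repeatedly presented stimulus

for _ in range(20):
    w = hebbian_step(w, pattern)

# The connections carrying the repeated pattern have grown strongly;
# the idle connections are untouched. (The unbounded growth is exactly
# why realistic models add normalization.)
print(w)
```

Nothing here is programmed about *what* to learn; the structure of the weights comes to reflect the structure of the repeated input, which is the sense in which such networks claim to model learning rather than instruction.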

As Pagels so expressively put it, this is the syndrome of "The Man Who Mistook His Brain for His Mind". To put it radically, I claim that if we could reproduce in detail (at the highest level of detail!) a brain as we see it, at a certain moment in time (taken from a corpse or extracted from a living being), the result of the functioning of this reproduction would qualify it as intelligent only as much as Leonardo da Vinci's splendid ornithopter qualifies as a bird or, for that matter, as an airplane. The emulation of the brain as an information-processing system, as well as its emulation as a neural network (on a realistic scale, i.e., in the domain of billions of multichanneled connections of changing intensities), are valid engineering tasks and will undoubtedly result in the automation of quite a number of intellectual activities.

But our understanding of, and eventually our ability to duplicate, human reasoning is not so much dependent upon the building of the neuronal chip (which mimics in silicon the behavior of neurons) as it is on appropriate integrative theories of the mind, or at least of some essential aspects of it. Optical, and later, molecular computing will definitely help us test more of the hypotheses of such theories, to simulate them, and eventually to make possible the hybrid man-machine that will allow for even higher accomplishments of our intelligence.



Education is the second practical field I would like to consider. The broad context is defined by the tendency to pass from the dissemination of declarative knowledge (of facts) to the dissemination of procedural knowledge (of skills, of how to perform an action). We educate people, along the logic of representation, as problem solvers and reductionists. They perform well if the problem does not deviate too much from the example they learned or if it can be reduced to some pre-established scheme. Their performance decreases alarmingly when we require creative effort from them, i.e., when reductions or permutations are not possible. The institution of education embodies the same characteristic. As an institution based on tradition, it is fit to process people, but it is not necessarily in a position to constitute an environment for interaction such as is required for constituting minds and not reducing human beings to operators.

Within education, students are introduced to commitments as they were established in past experience (the reification of tradition). The institution as such is a network of commitments. My claim is that instead of contributing to the constitution of minds, the institutionalization of education and its focus on procedural knowledge end up preventing that constitution. There are several factors determining this process:

1. ignorance of the requirement of the critical mass as a necessary condition;

2. lack of structural mobility, as required by the need to stimulate mind reconfigurations;

3. problem solving philosophy, instead of problem-generating function;

4. obsession with means to achieve assumed social purposes that end up hiding the purpose;

5. self-perpetuating drive, so that external factors ensure perpetuation of socially disputable functions.

I shall briefly deal with each of the abovementioned aspects.

1. To reach the critical mass is an exceptionally complex goal. The issue is not the number of students, but the circumstances of interaction and the quality (intensity and breadth) of the process. The feudal attitude of academia actually reflects a perceived need to provide an environment for this interaction. The breakdown occurs when territoriality becomes an issue in itself, not one relating to maintaining quality. The efficiency of mind constitution transcends the efficiency of the investment in education. Saving or spending money for reasons having to do with the institution, not its educational purpose, results in the loss of potential minds (because conditions for the establishment of mind interaction are not met).

2. Once territoriality is established, tradition perpetuates segmentation, no matter how counterproductive it is. The impossibility of objectivity in respect to the traditions we belong to (they appear as belonging to us, and this is misleading) results in the negative effect of tradition on the process of mind constitution. And when traditions are not available, we fake them and institutionalize them. The accent is henceforth put on form, not on substance.

3. As a result, education seems more and more involved in a catch-up game instead of in the exercise of initiative and the making of new forms of human experience. While failing to understand the nature of mind processes, education has become a packaging or canning industry, a service. It is affected by the aging syndrome which, as I have already mentioned, affects the anticipatory quality of the human mind, in this case, of our institutions.

4. The tragedy is that education does not notice this (or does not have the intelligence to understand it), since the self-perpetuating drive prevents not only learning, but also self-assessment or self-awareness (evaluation), as well as projection of goals (planning). Instead of pursuing processes of education, it pursues technologies of training.

5. It has become acceptable that education only represent the educational ideal, thus abandoning its most significant characteristic: purposeful, practical experience. In fact, education has become parasitic because, not exercising any anticipatory function, it is a training medium for skills, not a context for the constitution and interaction of minds. Instead of the humility of knowledge and doubt, education disseminates the impertinence of certitude as it results from its limited training goals and service functions.

All in all, a strange circularity characterizes the process. Education claims that society determines what it should accomplish; and what it accomplishes determines the society according to the perceived claim. Obviously, there are ways to change this, and my suggestions, once again, no less than my criticism, result from the model of mind submitted, especially from the social implications of this model. The condition of the mind is plurality, and interaction of minds is the concrete form of this plurality.

The quadrivium, which corresponds to the experiential context of ancient Greece, offered two disciplines of practical anticipation (music and astronomy), one of constitution (arithmetic), and one of representation (geometry). Under new practical circumstances such as ours, we should be able to offer an appropriate "quadrivium". Interaction cannot be imposed upon people through legislation; it should result from the necessity of their practical experience and from the new conditions this creates. This experience is segmented due to labor division and alienated due to mediations involved in our reciprocal relations. Accordingly, education has to constitute networks of interaction corresponding to the nature of our minds and to the brain and body, whose processes the mind controls. We have to address the conscious and the intuitive components, to educate intuition, and to allow for the mind's anticipatory characteristic. The asymmetry of the individual brain corresponds to the asymmetry of our mind. Education should cease the uniformizing action it exercises (at various levels) on its subjects and accommodate the individual in his or her irreducible characteristics. Obviously, the concept of democracy, a representation of an abstract ideal, cannot, if turned into an instrument of opportunism, serve as the structuring element unless we really intend to reduce the variety of minds to two or three acceptable types.

Like minds, education has to anticipate events, not merely follow them. As the institution of education corresponds to the brain (in its relation to minds), conditions for learning should be created accordingly so that learning ("mathema" means "what is learned") is followed by a diversification of possible interrelations, by an increased number of channels of communication, and by increased capacity for supporting human interaction. I would go so far as to claim the need for a barrier similar to the blood-brain barrier, which would shield education from society (and the political surges it goes through) to the extent such a shield is necessary. No doubt, education needs exchange with society, but a selective barrier will ensure proper conditions for mind constitution. Fundamental research, for instance, is not possible without such a selective barrier. A balance between how we support representation-oriented functions (in particular, problem solving), constitutive functions (on which creation of new values rests), and communication would allow education to play a role which goes beyond servicing needs.

There are many more practical lessons we can get from studying the mind. After all, what I have tried to say is that practical experience is the entry to the mind. Theories are never better than the practical experiences we project onto them.
