THE PRINCIPLE OF GENERAL INTELLIGENT ACTION
AND THE DESIGN OF COMPLEX ADAPTIVE SYSTEMS

Jean-Louis LE MOIGNE
Professor at University of Aix-Marseille III
GRASCE, ERS CNRS 166
Faculté d’Economie Appliquée – Centre Forbin
15-19, Allée Claude Forbin
F-13627 Aix-en-Provence Cedex 1

Invited conference, IEEE-SMC, Le Touquet, 1996, and NR GRASCE n°96-08

ABSTRACT

For a century, the engineering of social organizations has been developed on the principles of the Energetics Paradigm, and mainly on the Principle of Least Action (or the Optimizing Principle). We understand today that business and administrative organizations are not primarily processes of interaction between matter and energy (the energetics paradigm), but processes of conceptual interaction between information and decision through organization (the so-called « inforgethics paradigm »). One can identify the two related principles on which such a paradigm can be based : the principle of Self-Organizing Systems (or of Equilibration), and the principle of General Intelligent Action. Some engineering consequences of this second principle are considered for the design and the management of intelligent (or adaptive) social organizations and, more generally, of complex adaptive systems : the engineering of behavioral symbolization, the engineering of decisional heuristic search, and the engineering of recursive perception (or of self-organization).

Our collective understanding of the concept of social organization (business and administration) and, more generally, of complex adaptive systems has, for two centuries, mainly been built on the familiar features of the Paradigm of Energetics. The natural sciences have so often shown us that the physical laws which seem to govern organized natural phenomena (such as beehives) appear relevant to explain the behavior of any living system, that we have readily accepted to use the principles of energetics (the study of the general processes of interaction between physical matter and energy) for studying the behavior of any complex adaptive system ; that is, to study and, more generally, to design, to engineer and to manage our socio-technological organizations.

1. THE ENERGETICS PARADIGM AND THE PRINCIPLE OF LEAST ACTION
The metaphor of the perfect organization of beehives (or of flights of migrating birds) is often evoked to highlight the general efficiency of the principles of the energetics paradigm. The great Darwin spoke of « the bee’s architecture – or organization – as the most wonderful of known instincts… For the comb of the hive-bee, as far as we can see, is absolutely perfect in economizing labour and wax » (quoted by d’Arcy Thompson, 1917-1969, p. 14). The fact that the « hexagonal form of the cell terminated by three similar and equal rhombs » appears to mathematicians as the form which minimizes the amount of wax needed to store a given amount of honey appears as a sort of empirical validation of the metaphor : a perfect organization is an organization which obeys the principle of minima, a « principle which culminated in the principle of least action which guided eighteenth century physics, was generalized (after Fermat) by Lagrange, inspired Hamilton and Maxwell, and reappears in the latest developments of wave-mechanics » (d’Arcy Thompson, 1917-1969, p. 111).
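The minimum-wax claim can be checked numerically. Among the three regular polygons that tile the plane (triangle, square, hexagon), the hexagon has the smallest perimeter for a given enclosed area, hence the least wall material per cell. A short illustrative sketch (ours, not d’Arcy Thompson’s):

```python
import math

def perimeter_of_unit_area_ngon(n):
    """Perimeter of a regular n-gon whose area is 1.

    Area of a regular n-gon with side s is A = n * s**2 / (4 * tan(pi/n)),
    so for A = 1 the perimeter n*s equals sqrt(4 * n * tan(pi/n)).
    """
    return math.sqrt(4 * n * math.tan(math.pi / n))

# The three regular polygons that can tile a plane without gaps:
for n, name in [(3, "triangle"), (4, "square"), (6, "hexagon")]:
    print(f"{name}: perimeter for unit area = {perimeter_of_unit_area_ngon(n):.4f}")
# The hexagon yields the smallest perimeter (~3.722 vs 4.000 and ~4.559),
# i.e. the least wall material ("wax") per unit of stored area.
```

This is exactly the « principle of minima » at work : among feasible tiling forms, the bee’s form minimizes the resource spent per unit of output.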
This principle of least action (also known as the principle of natural economy, the principle of minima or of maximin, or the principle of universal parsimony) is generally considered a general principle of energetics (and of the natural sciences). Initially « in the air as a guiding postulate, a heuristic method », from Pappus to Euler, it became, at the end of the eighteenth century, a « natural principle » on which it was deemed necessary to base the theoretical and practical developments of all the engineering sciences. Not only in mechanical or civil engineering, but also in organizational engineering : the French « Saint-Simoniens » and the British Ch. Babbage were amongst the best known pioneers of the general application of the principle of least action to the design and the management of manufacturing organizations, less than a century before the impressive development of Taylorism throughout the twentieth century.

Let us call this principle of least action « the second principle of energetics », although the manuals of physics give this name only to a specific occurrence of it, the entropy principle (which is also a minimization principle). (The universally known first principle of energetics is the principle of mutual conservation between matter and energy ; it is not discussed here, as long as only matter and energy are concerned !)

2. THE INFORGETHICS PARADIGM : THE BASIC LOOP « INFORMATION-ORGANIZATION »

It apparently took a long time before the researchers working in the areas of organization and information sciences realized that the energetics paradigm was probably not relevant to understand the rather fascinating phenomena they recognized in social organizations, be they manufacturing, business or administrative organizations. I think that the first signal came from the famous thesis of H.A. Simon, « Administrative Behavior, a study of the decision-making processes in administrative organization » (1947-1977), showing that the observed behavior of social organizations and, more generally, of complex adaptive systems was not, and probably cannot be, an « optimizing behavior » trying to minimize the « energy » needed to produce a given output. Although they often process matter and energy and, as such, obey the physical laws of energetics, it appears that the behavior of complex adaptive systems cannot be seen as a process of interactions of matter and energy. Empirical observation easily shows that such matter-energy interactions do not really govern the behavior of complex systems such as, for instance, administrative organizations !… It also shows that such organizations appear as interactions between information processes and decision-making processes, the design and the management of those social organizations being the permanent search for some feasible and presumably satisfactory behaviors. Feasible, satisfactory, but not « energy optimizing » (nor « Subjective Expected Utility maximizing », as H.A. Simon will later argue. See, for instance, the 1978 Nobel lecture, in 1982, and 1983).

Twenty years later, the number of signals had grown. One may recall, for instance, a lecture of G. Bateson (1970-1972) concluding that « any endeavor to build a theoretical framework for the sciences of organization and of communication based on the theory of energetics is non-sense » (p. 209 in vol. 2 of the French translation, 1980). Non-sense because the « output » of the organizational process is not first matter or energy, but non-natural artifacts, symbol systems or information ; and because one of the usual outputs of the informational process is not first matter or energy, but a man-made concept, the concept of organization, as the first researchers working on the theory of self-organizing systems would add (H. Von Foerster, 1959 ; H. Atlan, 1972 ; E. Morin, 1977, 1980). A new paradigm is emerging from those discussions, which focuses on the conceptual interactions between information and organization (usually mediated by the decision-making processes), and no longer on the « objective » interactions between matter and energy. I have elsewhere (see Le Moigne 1991) titled this new paradigm the « inforgethics paradigm », to relate the concepts of INFormation and of ORGanization, the latter being itself etymologically related to the concept of ERG included in enERGetics : ERG evoking the result, the work or the labour ; ORG evoking the process, the organism producing this result ; and the suffix « ethic » evoking the ethical behavior of any decision-maker.

This paradigmatic shift, initially suggested to understand the socio-organizational process, implies some important epistemological discussions which cannot be presented and commented upon here. The powerful epistemological works of Jean Piaget (see for instance 1967), of H.A. Simon (1969-1981) and of E. Morin (1977, 1980, 1986, 1990) give us today a solid conceptual basis to argue for it and to relate it to its long and rich history, from Aristotle or Archimedes to Leonardo da Vinci, and from G.B. Vico to P. Valéry or A. Bogdanov (see for instance J.L. Le Moigne, in E. Andreewsky, Ed. 1991, and in F. Tinland 1991). And many contemporary researchers are today working to develop it in the general framework of the new sciences of complexity (E. Von Glasersfeld, 1981 ; P. Watzlawick, 1981 ; F. Varela, 1979…).

3. THE PRINCIPLE OF GENERAL INTELLIGENT ACTION

Our main aim here is to identify some of the basic principles supported by such a new paradigm, in order to enrich our contemporary intelligence of complex adaptive systems (such as socio-technical organizations). The first one has been more and more studied since the announcement of the somewhat paradoxical principle of « order from noise » by H. Von Foerster in 1959. (It appears as a paradox for the energetics paradigm, but it is becoming a founding principle for the inforgethics paradigm.) As it became « order from disorder », then « complexity from disorder », then « organization from information and information from organization », or « Eco-Auto-Re-Organization », it appeared to be one of the two faces of the same coin, the other one being the « principle of equilibration » formulated earlier by J. Piaget : the principle of equilibration of a system says that its behavior is a permanently constructed trade-off between its two forms of strategy, assimilation and accommodation (« to assimilate things through schemes and to accommodate schemes to things », J. Piaget 1937-1977, p. 328).

We shall focus here on what we propose to call the second principle of the inforgethics paradigm. Although it has many roots in the works of J. Piaget, it has been explicitly formulated by H.A. Simon (often in cooperation with A. Newell), progressively emerging from his work on the decision-making processes in social organizations, and mainly presented in his articles devoted to Artificial Intelligence (Turing Lecture, 1976) and to the sciences of cognition (1980). We shall see that the argumentation of this concept of « Intelligent Action » also finds some key roots in the early work of J. Piaget (1937-1977). And we shall propose some extensions of this general principle taking into account the self-organizing, or recursive, capabilities of an intelligent system.

Let us start with the initial definition of a general intelligent action proposed by A. Newell and H.A. Simon (Turing Lecture, 1976) : « By general intelligent action, we wish to indicate the same scope of intelligence as we see in human action : that in any real situation, behavior appropriate to the ends of the system and adaptive to the demands of the environment can occur, within some limits of speed and complexity ».

If we consider that social organizations are usually able to exhibit general intelligent action, that means that we consider that they are also able to design and to manage (that is to say, to engineer) some forms of behavior which have two inseparable characteristics :

- It is appropriate to the ends of the organization, which implies some sort of finalizing or teleological subsystems.

- It is adaptive to the demands of the organizational environment, which implies some internal devices that make it possible to « perceive » the constraints that the environment may oppose to the intentional actions of the organization.

Appropriateness, adaptivity… We verify that « Intelligence is Adaptation », as J. Piaget says (1935-1977, p. 10), or that « Intelligence is closely related with adaptivity (- with problem solving, learning, evolution) », as says H.A. Simon (1980), who adds : « A science of intelligent systems has to be a science of adaptive systems, with all that entails for the difficulty of finding genuine invariants… So long as we do not confuse adaptability with the ability to attain optimal solutions, cognitive science will be, basically, empirical science. Inference from optimality conditions will play a modest role in helping us to discover how things are ».

In other terms, the optimizing behavior defined by the guiding « principle of Least Action » will not usually be the intelligent behavior of a complex adaptive system ; much empirical evidence and some more conceptual considerations confirm this argument. At any moment, an organization appears to have many ends (and usually evolving ends), so that it cannot rationally compute « the optimal » behavior, which does not exist. It always faces many « satisficing » or adequate feasible behaviors, and no unique rational criterion exists to select « the best ». Many ends and, at the same time, many « ill-structured » representations of the environments of the organization, usually perceived as uncertain, complex and evolving : amongst those tangled representations, more or less associated with the tangled ends of the organization, there rarely appears any stable and unique trade-off from which the optimizing behavior could be computed. And more generally, the complex adaptive system lacks the computation and attention resources which would permit it to determine all the alternative courses of action amongst which it would select « the best » one.
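Simon’s contrast between optimizing and satisficing can be sketched in a few lines. The alternatives, scores and aspiration level below are hypothetical illustrations : the point is only that a satisficing search stops at the first alternative that meets an aspiration level, sparing the scarce computation and attention resources that an exhaustive optimization would consume.

```python
def optimize(alternatives, evaluate):
    """Optimizing: evaluate every alternative, keep the best (costly)."""
    return max(alternatives, key=evaluate)

def satisfice(alternatives, evaluate, aspiration):
    """Satisficing (in the spirit of Simon, 1955): stop at the first
    alternative whose evaluation meets the aspiration level; if none
    does, keep the last one examined."""
    choice = None
    for a in alternatives:
        choice = a
        if evaluate(a) >= aspiration:
            break  # good enough: no need to examine the rest
    return choice

# Hypothetical "courses of action" scored on a single criterion.
actions = ["plan_a", "plan_b", "plan_c", "plan_d"]
scores = {"plan_a": 0.4, "plan_b": 0.75, "plan_c": 0.9, "plan_d": 0.6}
print(optimize(actions, scores.get))        # examines all four -> plan_c
print(satisfice(actions, scores.get, 0.7))  # stops at plan_b
```

Note that the single-criterion `evaluate` is itself the strong assumption : the paragraph above argues precisely that a real organization rarely has such a unique, stable criterion.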
Nevertheless, « General Intelligent Action is a form of behavior we can recognize by its effects, whether it is performed by humans or not » (A. Newell and H.A. Simon, Turing Lecture, 1976)… performed, for instance, by social organizations.

The question is now : how can complex systems such as social organizations effectively engineer an « intelligent behavior » ? In his Nobel Lecture, H.A. Simon (1978) observes that « once a theory is well entrenched, it will survive many assaults of empirical evidence that purports to refute it unless an empirical theory, consistent with the evidence, stands ready to replace it » (Models of Bounded Rationality, 1982, vol. 2, p. 490). The guiding Principle of Least Action, seen as a tool for determining the optimizing behavior of the organization, has survived despite « many assaults of empirical evidence », as long as no other guiding principle of organizational behavior engineering stood ready to replace it. We argue here that the « Principle of Intelligent Action » developed by Artificial Intelligence and the sciences of cognition over the last twenty years must and can replace the « heuristic method of the minima and maxima postulate ».

This alternative method of « artificial adaptive or intelligent systems » has been developed and illustrated in many ways since the seminal article of H.A. Simon, « A behavioral model of rational choice » (1955). Let us quote another recent article of H.A. Simon and A. Vera which summarizes the operating core of the principle : « The hypothesis that intelligent behavior is the product of systems that can handle patterns of arbitrary variety and complexity, that can construct complex structures of such patterns, and store and modify such structures in memory ; that can input such patterns through the encoding of sensory information and output them through the innervation of motor neurons ; and that can compare patterns, behaving one way if the patterns match, another way if they do not. It is the ability to perform these functions, the functions of a physical symbol system, that provides the necessary and sufficient condition for behaving intelligently – responsively to the needs and goals of the organism and to the requirements imposed on it by the environment » (in « Situated Action : a symbolic interpretation », 1993).
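The functions listed in the quotation (store and modify symbol structures, match patterns, behave one way if they match and another if they do not) can be sketched as a minimal production system. The working memory contents and the rules below are hypothetical organizational examples, not Simon’s own programs.

```python
# A minimal production system: symbol structures in a working memory,
# condition-action rules that fire when their whole pattern matches.
memory = {("invoice", "unpaid"), ("customer", "known")}

# Each rule: (pattern to match against memory, symbol to add when it fires).
rules = [
    ({("invoice", "unpaid"), ("customer", "known")}, ("action", "send_reminder")),
    ({("invoice", "paid")}, ("action", "archive_invoice")),
]

def step(memory, rules):
    """Fire every rule whose full pattern is present in memory:
    behave one way if the patterns match, another way if they do not."""
    fired = {action for pattern, action in rules if pattern <= memory}
    return memory | fired  # store the new symbol structures in memory

memory = step(memory, rules)
print(("action", "send_reminder") in memory)    # matched, so it fired
print(("action", "archive_invoice") in memory)  # did not match
```

The same loop, iterated, is the skeleton of the « evolving collection of symbol structures » discussed in section 4.1.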
4. THE RENEWAL OF ORGANIZATIONAL ENGINEERING
If we consider that those very general procedures, formulated for living organisms, can also be formulated for complex adaptive systems such as social organizations, we can identify at least three operating procedures for the elaboration of their intelligent or adaptive behavior : the engineering of pattern symbolization, the engineering of heuristic search, and the engineering of teleological or recursive perception.

4.1 The engineering of behavioral symbolization
The concept of pattern symbolization is derived from the well-known « Physical Symbol System Hypothesis » proposed by A. Newell and H.A. Simon (Turing Lecture, 1976) : « A physical symbol system has the necessary and sufficient means for general intelligent action » (to have the means doesn’t mean that it will use those means intelligently !). In terms of organizational engineering, the question becomes : how can a complex adaptive system generate adequate patterns of symbols through its own complex actions in an ill-structured context, while trying to reach evolving and multiple goals ? Empirical evidence shows that all business and administrative organizations constantly produce such patterns of symbols. It also illustrates that « a physical symbol system is a machine that produces through time an evolving collection of symbol structures », assuming the general definition of the concept of symbol system, seen as a complex artifact which not only produces patterns of symbols but also « both designates (to access the process) and interprets » objects seen as processes (invoices, order forms, customer accounts, personnel files and many other organizational databases are familiar examples of such pattern symbolization developed by social organizations). But a question remains : is the complex adaptive system sufficiently aware of the enrichment of the representation (through symbol systems) of its own activities in its various (well- and ill-structured) contexts ? Not only « to construct complex structures of such patterns », but also « to store and modify such structures in memory ».

The engineering of symbolization is necessarily an engineering of organizational memorization (see J.L. Le Moigne 1991). And evidence shows that the quality of the organizational memorizing processes often determines the quality of the intelligence (or adaptivity) of the behavior of the organization. « One structural requirement for intelligence is the ability to store and manipulate symbols », say A. Newell and H.A. Simon (1976), paraphrasing W. McCulloch : « What is a symbol, that intelligence may use it, and intelligence, that it may use a symbol ? » The first main recommendation which can be inferred from the Principle of Intelligent Action for engineering social organizations is perhaps to develop and enrich their own symbolization capabilities. They often need « more subtle symbols », as P. Valéry said, if they are to remember that « symbols lie at the root of intelligent action » (A. Newell – H.A. Simon, 1976).

4.2 The engineering of decisional heuristic search
« Knowing that physical symbol systems provide the matrix for intelligent action doesn’t tell us how they accomplish this », recall A. Newell and H.A. Simon (1976) ; empirical evidence, here too, suggests that organizations exhibit their cognitive behavior by finding and solving problems through « heuristic search » : « A physical symbol system (for instance, a social organization) exercises its intelligence in problem solving by search – that is, by generating and progressively modifying symbol structures until it produces a solution structure. »

The ability of a complex adaptive system to invent, through learning, adequate heuristic rules, appropriate abductive reasonings, relevant dialectical topics (be they empirical recipes) reveals the second face of the guiding principle of organizational intelligent action. In brief, it tells us that it is more important for the intelligent system to think about « how to decide what to do » (H.A. Simon, 1977) than to compute « what to decide to do » ; to think about the « meta-knowledge » (J. Pitrat, 1990) of the system, that is to say knowledge about organizational knowledge or, more specifically, knowledge as rules to use for, and sometimes to invent, knowledge ; to think about the rules it uses or can use for symbol manipulation, be they empirical know-how, implicit common knowledge (see E. Andreewsky et al., 1992) or explicit topics developed for the argumentation of any rhetorical « inventio » (Aristotle) or some « natural logic » reasoning (see J.B. Grize, 1990). H.A. Simon has summarized, in chapters 5 and 6 of « The Sciences of the Artificial » (1991), a sort of typology of the processes through which a system can formulate plausible heuristic searches in a given semantic context, focusing on the general procedure of « means-ends analysis » (M. Minsky, twenty years earlier, proposed a first typology focusing on the « hill-climbing » procedures, which are probably more directly inspired by the principle of least action).
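The means-ends analysis procedure mentioned above can be sketched very simply : at each step, measure the « difference » between the current state and the goal, and apply an operator that reduces it. The states and operators below are hypothetical organizational illustrations, not Simon’s own examples.

```python
# A tiny means-ends analysis sketch: repeatedly pick an applicable operator
# whose effects reduce the remaining difference with the goal state.
GOAL = {"order_recorded", "invoice_sent", "payment_received"}

# Each operator: (name, preconditions, effects).
OPERATORS = [
    ("record_order", set(), {"order_recorded"}),
    ("send_invoice", {"order_recorded"}, {"invoice_sent"}),
    ("collect_payment", {"invoice_sent"}, {"payment_received"}),
]

def means_ends_analysis(state, goal, operators):
    plan = []
    while goal - state:  # the "difference" still to be reduced
        for name, pre, eff in operators:
            # Applicable operator whose effects reduce the difference?
            if pre <= state and eff & (goal - state):
                state = state | eff
                plan.append(name)
                break
        else:
            return None  # no operator reduces the remaining difference
    return plan

print(means_ends_analysis(set(), GOAL, OPERATORS))
# ['record_order', 'send_invoice', 'collect_payment']
```

« What shall be done next ? » is answered, at each iteration, by the comparison `goal - state` : the ends, not the means, drive the selection of the operators.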
The « means-ends analysis » technique, for which « the critical question is "what shall be done next ?" » (A. Newell and H.A. Simon, 1976), suggests strong attention to the evolving ends of the searching processes :
« On the whole, the scarce factor in our decision making is not information but attention. What we attend to, by plan or by chance, is a major determinant of our decisions. The general scarcity of attention suggests that people and organizations can enhance the quality of their decision-making by searching systematically, but selectively, among potential information sources to find those that deserve most careful attention, and that might provide items for the agenda. This is a major function of the so-called intelligence units in organizations… » (H.A. Simon 1988).

But the searching process is not the exclusive responsibility of an « intelligence » unit. The whole system is concerned through its capabilities for « problem formulation and for the generation of alternative courses of action ». To do so, it must use all its cognitive resources, and not only those of deductive formal logic. The social organization must admit that there are always, at any moment, many plausible behaviors which can lead it to an expected adaptive behavior ; if « we measure the intelligence of a system by its ability to achieve stated ends in the face of variations, difficulties and complexities posed by the task environment » (A. Newell and H.A. Simon 1976), we must admit that this « consistency can be reached through an infinity of diverse procedures » (J. Piaget 1935-1977, p. 345). The fact that the search for an organizational intelligent behavior can be helped – or eventually guided – by some computerized heuristic programs (such as expert systems or knowledge-based systems) does not modify this fact. Such a thing as « the perfect rational choice », which would be rationally accepted by all the rational participants of a social organization, does not generally exist in practical conditions (see J.L. Le Moigne, 1989 and 1990 b). So instead of trying to find it through optimization algorithms, organizations may develop their own capability for diversifying their heuristic search procedures in order to increase their aptitude to invent adequate intelligent behaviors : that is, to develop their own engineering for heuristic search (before their engineering for programming it in good software !).

4.3 The engineering of recursive perception or of self-observing organization
E. Von Glasersfeld (1981) has often quoted a rather provocative argument of J. Piaget, which concerns not only the development of the child’s intelligence, but also any evolving complex system or social organization : « Intelligence organizes the world in its own self-organizing process » (J. Piaget, 1935, p. 311). Or, more briefly again, « Intelligence is self-elaborating » (J. Piaget, 1937-1977, p. 313). The understanding of the self-referential foundation of the intelligence of a complex system has gained, over half a century, some strong empirical and theoretical confirmations, and it has often contributed to new understandings of recursive reasoning and to some related progress in computer science. But our social cultures are still not easily accustomed to the current practices of such bootstrapping or « strange-looping » reasoning ! J. Piaget (1937-1977, p. 366), studying the development of « inventive intelligence » (the core of cognitive intelligence, in all the phases of its development), has shown that it always needs, at any level, effective « experience », or « subject activity », or conscious behavior. The result is not known, but the process is known. To be conscious, the system’s behavior must be a self-observing behavior.

J. Pitrat (1990) has stated this conclusion in an article titled « An intelligent system must and can observe its own behavior », and he has shown its consequences in terms which can be directly applied to the behavior of a social organization : it may « build models of itself to plan and control its future behavior. It may monitor its problem solving processes… to decide whether to carry on with its present attempt ». It may understand what it is doing and so learn from its successes and failures.

To do so, the system must « choose what is observed » and what would have to be observed ; this active observing process leads the organization to identify more explicitly, at each step of its behavior, its own aims and goals, and to develop its own teleological or finalizing process.

And it has to build some specific methods to observe itself and to interpret its observations : in other terms, it has to develop its own engineering of symbolization and memorization. « It is easy to create a trace », concludes J. Pitrat, « but it is difficult to use it, particularly if one has to do it while one is observing one’s own behavior ». This difficulty appears less important in practice than in theory, at least for a social organization, if it has made such a specific project of self-observation explicit amongst its main aims.
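Pitrat’s point (creating a trace is easy, using it is the hard part) can be sketched in a few lines : a search procedure records its own decisions and consults that record before acting, so that it never repeats itself. The toy problem below is a hypothetical illustration, not Pitrat’s own example.

```python
def self_observing_search(start, neighbors, is_goal, max_steps=100):
    """Breadth-first search that records a trace of its own behavior
    and uses it: states already present in its observations are never
    tried again, so the search cannot loop on itself."""
    trace = []                  # the system's record of its own decisions
    visited = {start}
    frontier = [start]
    for _ in range(max_steps):
        if not frontier:
            break
        state = frontier.pop(0)
        trace.append(state)     # observe: log what the system just did
        if is_goal(state):
            return state, trace
        for n in neighbors(state):
            if n not in visited:    # consult the observations before acting
                visited.add(n)
                frontier.append(n)
    return None, trace

# Toy problem (hypothetical): reach 10 from 0 with moves +3 and -2,
# a state space full of cycles that an unobserved search could loop in.
goal, trace = self_observing_search(
    0, lambda s: [s + 3, s - 2], lambda s: s == 10)
print(goal, len(trace))  # the trace also measures how much work was done
```

Without the `visited` consultation of its own trace, the same procedure would revisit states endlessly ; with it, the trace doubles as material for later self-models (which moves were tried, in what order, at what cost).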
« Could a machine ever be conscious ? » This old question leads M. Minsky to another old question : « Could a person ever be conscious ? » But in this very question we now have a decisive part of the answer : if one could, one would have to know how to be conscious : by self-observing one’s own behavior. Let us now move to an even more complex question : « Could a social organization be conscious ? » We shall certainly never know a clear answer to it. But we do know now that if the organization behaves in order to observe (to intentionally represent and memorize through systems of symbols, and to transform through heuristic search) its own behavior, it will have a better chance to exhibit an intelligent or adaptive behavior in its complex and evolving environment.

5. INTELLIGENT ORGANIZATION : ARCHITECT OR BEE ?

Seen at least as a guiding heuristic method, can’t we consider that the Principle of Intelligent Action appears rather relevant for the (artificial) engineering of social organizations : engineering of symbolization, of heuristic search, of recursive observation… We have identified three ways of engineering which can contribute to the development of intelligent systems.

As a concluding metaphor, perhaps we may remember here the well-known parable of « the Bee and the Architect » told by K. Marx :

« The perfection of the geometrical form of the bees’ cells challenges the cleverness of most architects. But the superiority of the least skilled architect over the most expert bee lies in the fact that he designs (or engineers) the cell in his own mind before building it in the hive » (K. Marx, The Capital, p. 786, my translation from a French edition).

To design one’s own project before acting : isn’t that a rather good heuristic rule for the design and control of an intelligently behaving system ?
REFERENCES

ANDREEWSKY E. et al. (1991), « Systémique et cognition », Ed. Dunod, Coll. AFCET-Système, Paris, 194 p.

ANDREEWSKY E. et al. (1992), « Connaissances implicites et connaissances explicites », N° spécial de la Revue Internationale de Systémique, vol. 7, N° 1-2.

ATLAN H. (1972), « L’organisation biologique et la théorie de l’information », Ed. Hermann, Paris.

BATESON G. (1972), « Steps to an Ecology of Mind », Chandler Pub. Cy, N.Y.

D’ARCY WENTWORTH THOMPSON (1917-1969), « On Growth and Form », Abridged Edition by J.T. Bonner, The University Press, Cambridge, UK.

FEIGENBAUM E. and FELDMAN J. (Eds.) (1963), « Computers and Thought », McGraw-Hill, N.Y.

GRIZE J.B. (1990), « Logique et langage », Ed. Ophrys, Paris.

LE MOIGNE J.L. (1989), « Natural and artificial computing and reasoning in economic affairs », Theory and Decision, vol. 27, N° 1-2, pp. 107-117.

LE MOIGNE J.L. (1990), « La modélisation des systèmes complexes », Ed. Dunod, Paris, 178 p.

LE MOIGNE J.L. (1990 b), « Intelligence artificielle et raisonnement économique », Monde en développement, T. 18, N° 72, pp. 11-18.

LE MOIGNE J.L. (1991), « Quelle épistémologie pour une science des systèmes naturels « qui sont avec cela artificiels » ? », in F. Tinland, Ed.

LE MOIGNE J.L. (1991 b), « Sur les fondements épistémologiques de la science de la cognition », in E. Andreewsky, Ed., « Systémique et cognition », 1991.

LE MOIGNE J.L. (1994), « La théorie du système général, théorie de la modélisation », 4e éd. complétée, Ed. PUF, Paris, 330 p.

MORIN E. (1977, 1980, 1986), « La méthode », Tome 1 : « La nature de la nature » (1977), Tome 2 : « La vie de la vie » (1980), Tome 3 : « La connaissance de la connaissance » (1986), Ed. du Seuil, Paris.

MORIN E. (1991), « La méthode », Tome 4 : « Les idées, leur habitat, leur vie, leurs mœurs, leur organisation », Ed. du Seuil, Paris, 260 p.

NEWELL A. and SIMON H.A. (1976), « Computer science as empirical inquiry : symbols and search », Communications of the ACM, March, vol. 19, N° 3, pp. 113-126.

PIAGET J. (1937), « La construction du réel chez l’enfant », Ed. Delachaux et Niestlé, Neuchâtel.

PIAGET J. (1937-1977), « La naissance de l’intelligence chez l’enfant », Ed. Delachaux et Niestlé, Neuchâtel.

PIAGET J. (1967), « Logique et connaissance scientifique », Ed. Gallimard, Encyclopédie de la Pléiade, Paris.

PITRAT J. (1990), « Métaconnaissance, futur de l’intelligence artificielle », Ed. Hermès, Paris.

SIMON H.A. (1945-1976), « Administrative Behavior – A Study of Decision-Making Processes in Administrative Organization », Third Edition, Expanded (1976), The Free Press, Macmillan, N.Y.

SIMON H.A. (1969-1981), « The Sciences of the Artificial », 2nd Edition, The MIT Press, Cambridge, Mass.

SIMON H.A. (1978, Nobel Lecture), « Rational decision making in business organizations », in H.A. Simon, « Models of Bounded Rationality », 1982, pp. 474-494.

SIMON H.A. (1980), « Cognitive science : the newest science of the artificial », Cognitive Science, 4, pp. 33-46.

SIMON H.A. (1982), « Models of Bounded Rationality » (2 vol.), The MIT Press, Cambridge, Mass.

SIMON H.A. (1983), « Reason in Human Affairs », Stanford University Press, CA, 115 p.

TINLAND F. (Ed.) (1991), « Systèmes naturels et systèmes artificiels », Ed. Champ Vallon, 01420 Seyssel.

VARELA F.J. (1979), « Principles of Biological Autonomy », North Holland Pub., N.Y.

VERA A.H. and SIMON H.A. (1993), « Situated action : a symbolic interpretation », Cognitive Science, 17, pp. 7-48.

VON FOERSTER H. (1959), « On self-organizing systems and their environments », in « Observing Systems », 1984, pp. 12-36.

VON FOERSTER H. (1981, 1984), « Observing Systems », Intersystems Publications, Seaside, Cal.

VON GLASERSFELD E. (1987), « The Construction of Knowledge », Intersystems Publications, Salinas, Cal., USA.

WATZLAWICK P. (Ed.) (1988), « L’invention de la réalité – contributions au constructivisme », traduit de l’allemand (1981), Ed. du Seuil, Paris.