For the purposes of our discussion where we want to include what is commonly described as an "organism" as well as what is understood by the terms machine or "mechanism" in the narrower sense, it will be advisable to avoid the latter two expressions and to employ the more neutral expression: "system" in the sense of a coherent structure of causally connected physical parts. The term system will thus be used here roughly in the sense in which it is used in von Bertalanffy's "General System Theory". We need not enter here into all the difficulties of a precise definition of this term and may content ourselves with saying that by a system we shall throughout understand a persistent structure of coherent material parts which are so connected that, although they can alter their relations to each other, and the system thereby can assume various states, there will be a finite number of such states of which the system is capable, that these states can be transformed into each other through certain orderly sequences, and that the relations of the parts are interdependent in the sense that, if a certain number of them are fixed, the rest is also determined.
Hayek "Within Systems and Between Systems"
Upon receipt of pulses in certain combinations and synchronisms it will be stimulated to emit a pulse of its own, otherwise it will not emit. The rules which describe to which groups of pulses it will so respond are the rules that govern it as an active organ.
This is clearly the description of the functioning of an organ in a digital machine, and of the way in which the role and function of a digital organ has to be characterized. It therefore justifies the original assertion, that the nervous system has a prima facie digital character.
. . . The above description contains some idealizations and simplifications, which will be discussed subsequently. Once these are taken into account, the digital character no longer stands out quite so clearly and unequivocally. Nevertheless, the traits emphasized in the above are the primarily conspicuous ones. It seems proper, therefore, to begin the discussion as I did here, by stressing the digital character of the nervous system.
John von Neumann, The Computer and the Brain
On more than one occasion, F. A. Hayek recalled his shared interest with John von Neumann in the human mind and the role of the nervous system in supporting perception. Following up on issues of complexity that were made clearer to him during the writing of The Sensory Order, Hayek notes in "Within Systems and Between Systems" that:
The field in which the general properties of a class of phenomena are derived from the general principle by which they are defined, a field to which indeed our particular problem belongs, is the general theory of machines or, as J. von Neumann has recently called it, the "logic of automata". This theory also is able, on the one hand, to indicate the range of phenomena which can be produced, and therefore be accounted for, by the operation of certain basic principles, and, on the other hand, to show that certain other conceivable operations lie outside the range of what a machine so defined can possibly do. The general theory of classifying machines in particular, with which we shall be concerned, may not be sufficient to specify in detail the operation of any such classifying machine of a sufficient degree of complexity to be of practical interest; but it will indicate the range of phenomena which may be produced by machines of this type. (364)
Theorists face a tall order. They are tasked with describing the world in the abstract. That is, they must define the world in terms of a hierarchy of categories that subsume one another. These abstractions also interact with one another in a consistent manner; or, better, particular instances of more abstract classes interact in a manner defined by their membership in the class. If a theory is good, objects and systems observed in the empirical world will tend to conform to the theory's descriptions and status attributions. This also means that the theory should have some predictive power. The same holds for the theories guiding the actions and interpretations of autonomous agents. Increased predictive power seems to move us toward the realm of determinism. But we have little choice: models of the world represent claims about the logical processes that apparently govern its evolution, and such claims carry some element of determinism.
Hayek referred to a system of "multiple classification" where "[t]he classifying system may in this sense be regarded as embodying a theory of the external world which enables it to predict (=produce the representative of) the events which the former will cause in the environment" (366). Hayek recognizes that this system of classification bears resemblance to the nervous system. However, in "Within Systems", Hayek constrains his focus to systems whose states are clearly defined. Though the states of the system evolve in the sense that states change over time, the rules of the system are not evolving.
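To fix ideas, here is a minimal sketch, in Python, of a classifying system with fixed rules of the kind Hayek isolates: the rules embody a crude "theory of the external world," and only the incoming data vary. All stimuli, class names, and prediction rules are invented for illustration.

```python
# A fixed-rule "multiple classification" sketch: each event belongs to
# several classes at once, and class combinations select a predicted
# consequent event. The rules are fixed; only the incoming data vary.

# Each stimulus is assigned to multiple classes simultaneously.
CLASSES = {
    "rustling": {"sound", "sudden"},
    "shadow": {"visual", "sudden"},
    "warmth": {"touch", "gradual"},
}

# The system's "theory of the external world": class combinations that,
# when present, produce the representative of an expected event.
PREDICTIONS = {
    frozenset({"sudden", "sound"}): "approaching-animal",
    frozenset({"sudden", "visual"}): "approaching-animal",
    frozenset({"gradual", "touch"}): "sunrise",
}

def classify_and_predict(stimulus: str) -> str | None:
    """Classify the stimulus, then predict the event its classes imply."""
    classes = CLASSES.get(stimulus, set())
    for trigger, predicted_event in PREDICTIONS.items():
        if trigger <= classes:  # every class named by the rule is present
            return predicted_event
    return None  # no rule fires: the event falls outside the scheme

print(classify_and_predict("rustling"))  # -> approaching-animal
print(classify_and_predict("thunder"))   # -> None (outside the scheme)
```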
Hayek's focus on the relatively simple case of a system without creative evolution is not due to a lack of concern. The archetypal system of "multiple classification", the nervous system, generates novelty in response to data that does not conform to its prior classificatory scheme. In The Sensory Order, Hayek presumes that the nervous system actively responds to changes in the environment in order to tune itself to that environment. He explains that the nervous system stands among three adjacent orders: that of the external world, that of the nervous system, and that of mental activity.
2.7 In the application of the concept of isomorphism to psychological problems there has been a good deal of confusion with regard to the terms of structures which might be said to be isomorphous. There are three such different structures, any pair of which might be and has been represented as the terms between which isomorphism prevails. There are:
1. The physical order of the external world, or of the physical stimuli, which for the present purpose we must assume to be known, although our knowledge of it is, of course, imperfect.
2. The neural order of the fibres, and of the impulses proceeding in these fibres, which though undoubtedly part of the complete physical order, is yet a part of it which is not directly known but can only be reconstructed.
3. The mental or phenomenal order of sensations (and other mental qualities) directly known although our knowledge of it is largely only a 'knowing how' and not a 'knowing that', and although we may never be able to bring out by analysis all the relations which determine that order.
Hayek observes that the neural order is responsible for making order out of the data of the external world. Of course, it is also part of, and interacts with, the internal world of the mental order. Identifying the link between these three orders is a major concern of The Sensory Order. In short, the nervous system is presented as a model-building machine:
2.43 In the system of classification in which we shall be interested the different individual events will be the recurrent impulses arriving through afferent fibres at the various centres of the nervous system. For the purpose of this discussion we shall have to assume that these individual impulses possess no significant individual properties which distinguish them from one another. They must be regarded initially as what the logician describes as an 'uninterpreted set of marks'. Our task will be to show how the kind of mechanism which the central nervous system provides may arrange this set of undifferentiated events in an order which possesses the same formal structure as the order of sensory qualities.
Hayek argues that the neural order is isomorphic to the order of the external world and that this isomorphism transfers to the order of the mental (internal) world. Notice the tension this creates for a theorist. On one hand, the theorist wishes to define the constellation of system states, including a map of the states themselves, in terms of attributes and functions, and a map of transitions between states. Yet the system being described is, in an open-ended fashion, continually generating and redefining models of its environment, internal and external.
We will have to live with this tension. As model builders, we necessarily place limits on model complexity. And if the data provided to the model are controlled, whether artificial or derived from the empirical world, then avenues for creativity within the model are limited. Compare this to models of robot learning, where robots in an uncontrolled environment must engage in open-ended classification and, perhaps, construct models to explain the apparently novel behavior of diverse classes of objects observed in the external world. The space of the problem quickly explodes. The student of society likely has much to learn from these robots concerning model formation, but here we concern ourselves with the logic of theory and the controlled environment of the simulation, as our aim is to clarify the terms of a social theory informed by finite state machines.
So what are we to do? Instead of giving up in the face of the problem's complexity, we can recognize that the general pattern of modeling described by Hayek conforms to the finite state machine. Consider his description of the nervous system, which, at least indirectly, draws from the mechanized model of nervous activity presented by McCulloch and Pitts (1943) through their other work and the discourse to which their work contributed. Just as developers of artificially intelligent systems and of the frameworks supporting their development have done, I will take a decidedly deterministic view of modeling in our discussion. It is not that I am confident in a deterministic view of consciousness that robs the human agent of freedom of will. Rather, it is from these deterministic systems that we have constructed the most sophisticated forms of intelligence which, in the view of Geoffrey Hinton, represent the best models of human intelligence available (Hinton refers to transformers and the large language models they support).
To the extent that outcomes do not appear to be described deterministically by these models, it is due to our own ignorance of the data and of the true model or meta-model that governs the behavior of the system. Consider the double problem of modeling human decision-making: 1) we do not have direct access to the model governing agent decision-making, and 2) both the data and the agent's model are themselves not directly accessible to the agent. Ignorance at every scale pervades our reality. Somewhere in this unknowability may exist free will, but there is no demand on the scientist to decide for or against the possibility of free will, as an assertion in either direction is, not without irony, an article of faith.1
Modeling with Finite State Machines
Caldwell (2014) points out that the contributions of the long-unpublished "Within Systems" paper are present across the work published in Studies in Philosophy, Politics, and Economics, and I think that "Degrees of Explanation", "The Theory of Complex Phenomena", and "Rules, Perception, and Intelligibility" deserve special note in this respect. In these works, Hayek uses the lens of "organized complexity", as Warren Weaver termed it, to formulate the task of theory. The nature of relations in a complex system cannot be collapsed to a point prediction generated by a set of inputs. There exists endogeneity in the system such that the states of actors are mutually contingent upon one another. How do we describe the relation of agents with mutually contingent states?
In "Degrees of Explanation", Hayek explains that:
By a scientific prediction we mean the use of a rule or law in order to derive from certain statements about existing conditions statements about what will happen (including statements about what we will find if we search at a particular point). Its simplest form is that of a conditional or "if then" statement combined with the assertion that the conditions stated in the antecedent are satisfied at a particular time and place (214).
This sort of conditional statement underlies theoretical prediction just as much as it underlies the operation of a finite state machine. A well-formed conditional statement should refer to data that meaningfully impacts the state of the relevant system, whether that system is an autonomous agent, is part of such an agent's own composition, or includes that agent as a part of itself. Hayek continues:
The position will here be that if we already knew the relevant laws, we could predict that if several hundred specified factors had the values $x_1$, $x_2$, $x_3$, . . . $x_n$, then there would always occur $y_1$, $y_2$, $y_3$, . . . $y_m$. But in fact all that our observation suggests may be that if $x_1$, $x_2$, $x_3$, and $x_4$, then there will occur either ($y_1$ and $y_2$) or ($y_1$ and $y_3$) or ($y_2$ and $y_3$) or some similar situation (214).
Hayek goes on to consider that the relations between these variables could be sufficiently complex that "[t]here may be no possibility of getting beyond this by means of observation, because it may in practice be impossible to [test] all the possible combinations of the factors $x_1$, $x_2$, $x_3$, . . . $x_n$" (214). Hayek views the resultant complexity as creating a situation where we are unable "to invent new hypotheses from which we can deduce predictions for situations we have not yet observed." His solution is that, when faced with the limitations of empirical modeling, "we shall here have to proceed in our deductions, not from the hypothetical or unknown to the known and observable, but - as used to be thought to be the normal procedure - from the familiar to the unknown." Hayek seems to be writing about this problem with concern for the adoption of the scientific method in the social sciences.
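A minimal sketch may help clarify the form of prediction Hayek describes: the antecedent conditions select not a single outcome but a set of admissible outcome patterns. The variables and patterns below simply mirror Hayek's $x$s and $y$s and are otherwise invented.

```python
# Hayek's weaker prediction form: if x1..x4 hold, then one of several
# outcome patterns occurs. The rule excludes outcomes without naming one.

def predict(x1: bool, x2: bool, x3: bool, x4: bool) -> set[frozenset[str]]:
    """Return the admissible outcome patterns when the antecedent holds;
    an empty set means the rule asserts nothing about this case."""
    if x1 and x2 and x3 and x4:  # the antecedent conditions are satisfied
        return {
            frozenset({"y1", "y2"}),
            frozenset({"y1", "y3"}),
            frozenset({"y2", "y3"}),
        }
    return set()

patterns = predict(True, True, True, True)
print(frozenset({"y1", "y2"}) in patterns)  # -> True: an admissible pattern
print(frozenset({"y1"}) in patterns)        # -> False: excluded by the rule
```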
While this might seem like a retreat to a scholastic perspective, Hayek's concern drives him in the direction of complexity that is consistent with the modeling of agents and systems as finite state machines. In some respects, you might think of these systems as deductive. From the perspective of generative modeling, however, the reasoning is more appropriately called abductive: by knowledge of the rule hierarchies governing the model and its agents, one can formulate a general explanation of the observed outcomes. Even without access to these methods, Hayek provided general explanations of system behavior by constructing consistent accounts of agent interaction centered on communication between agents.
Despite lacking the tools required to demonstrate complex systems empirically, Hayek employed concepts and structures that were precisely those of computational theory. It should not be surprising, then, that he uses the language and structure of finite state machines. Nicholas Vriend describes Hayek's perspective on decision-making as consistent with the nature of decisions made by classifier systems: systems composed of rules that classify data and make decisions and that can be refined over time through variation and competition (Vriend 2002). However, this emphasis on classifier systems, while an attractive feature of Vriend's explanation, could be given greater generality by showing that Hayek's theorizing is consistent with the more general perspective of finite state machines.
A finite state machine is a system that, like a classifier system, is composed of rules used to interpret data and motivate a response. We can think of the finite state machine as a more elementary and more general concept that includes the classifier system as a particular case. Finite state machines can embody an array of discrete states, and these states can be mapped in light of the possible state-to-state transformations that a finite state machine can manifest. Finite state machines are deterministic, meaning that the state of the machine at any given time, $t$, is determined by the machine's state at $t-1$ together with the input received in that period. In his book Computation: Finite and Infinite Machines, Marvin Minsky uses the designations "finite state machines" and "finite automata" interchangeably. He describes such systems as having a "simple relation between their structure and their behavior." He continues that "Once we have (1) the description of an automaton, (2) its initial conditions (state), and (3) a description of the signals that will reach it from its environment, we can calculate what its state will be at each successive moment."
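Minsky's three ingredients translate directly into code. Below is a minimal sketch of a deterministic finite state machine: a transition table (the machine's description), an initial state, and a sequence of input signals suffice to calculate the state at each successive moment. The state and input labels are invented for illustration.

```python
# A minimal deterministic finite state machine in Minsky's terms: with
# (1) the machine's description, (2) its initial state, and (3) the
# incoming signals, every successive state can be calculated.

# (1) The description: a transition table mapping (state, input) pairs
# to next states. Undefined pairs leave the state unchanged.
TRANSITIONS = {
    ("resting", "prey-seen"): "stalking",
    ("stalking", "prey-near"): "striking",
    ("stalking", "prey-lost"): "resting",
    ("striking", "prey-lost"): "resting",
}

def run(initial_state: str, signals: list[str]) -> list[str]:
    """The state at time t is fully determined by the state at t-1
    and the signal received in that period."""
    state, history = initial_state, [initial_state]
    for signal in signals:
        state = TRANSITIONS.get((state, signal), state)
        history.append(state)
    return history

# (2) the initial state and (3) the environmental signals:
print(run("resting", ["prey-seen", "prey-near", "prey-lost"]))
# -> ['resting', 'stalking', 'striking', 'resting']
```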
The terms "finite state machine" and "finite automaton" are thus equivalent. It is clear, then, that John von Neumann was working in the same tradition described by Minsky, since it was von Neumann who presented the first paper on the general theory of automata at the Hixon Symposium in 1948! There he describes an automaton with the following properties:
It possesses a finite number of states, which need be prima facie characterized only by stating their number, say $n$, and by enumerating them accordingly: $1$, $2$, . . . $n$. The essential operating characteristic of the automaton consists of describing how it is caused to change its state, that is, to go over from a state $i$ into a state $j$. This change requires some interaction with the outside world, which will be standardized in the following manner. As far as the machine is concerned, let the whole outside world consist of a long paper tape. Let this tape be, say, 1 inch wide, and let it be subdivided into fields (squares) 1 inch long. On each field of this strip we may or may not put a sign, say, a dot, and it is assumed that it is possible to erase as well as to write in such a dot. A field marked with a dot will be called a "1," a field unmarked with a dot will be called a "0." (von Neumann 1948, 313)
Here, von Neumann describes the structure of a theoretical computing machine. Not coincidentally, von Neumann was responsible for organizing the funds for and overseeing the development of the earliest computers during the 1940s (Aspray 1990, 29-94), and his name is given to the architecture that underpins modern computing devices (with, for example, adders, processing units, and short- and long-term storage). Some readers may notice a resemblance to the Turing machine, which kick-started the age of modern computing. We can describe a Turing machine as a finite state machine with infinite memory. Thus, even in terms of theoretical computing, finite state machines are especially general abstractions. Finite state machines might interact with other finite state machines through communication, cooperation, or competition. A video game is a multitudinous finite state machine, by which I mean that it is a finite state machine composed of various finite state machines. The McCulloch and Pitts neuron is a finite state machine, as is an adder. And one might model the human brain as a finite state machine.2
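As a concrete illustration of the claim that the McCulloch and Pitts neuron is a finite state machine, here is a sketch of a threshold unit in their spirit: it emits a pulse exactly when its weighted binary inputs reach a threshold. The weights and threshold are arbitrary illustrative values, and inhibition is approximated here with negative weights rather than the absolute inhibition of the original model.

```python
# A threshold unit in the spirit of McCulloch and Pitts: it emits a
# pulse (1) exactly when the weighted sum of its binary inputs reaches
# the threshold, and stays silent (0) otherwise.

def mcp_neuron(inputs: list[int], weights: list[int], threshold: int) -> int:
    """Fire iff summed excitation meets the threshold; negative weights
    stand in here for inhibitory inputs."""
    excitation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if excitation >= threshold else 0

# Configured as an AND gate, the unit responds only to "pulses in
# certain combinations," echoing the von Neumann epigraph above.
print(mcp_neuron([1, 1], weights=[1, 1], threshold=2))  # -> 1
print(mcp_neuron([1, 0], weights=[1, 1], threshold=2))  # -> 0
```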
Having reviewed finite state machines, we can better situate Hayek's description of complex systems. In "Within Systems", Hayek describes the interaction of two systems, which we can think of as animals:
49. . . . Let us assume two similar systems so constituted that they are capable not only of hunting a moving object (which we will call the prey) but also showing symbols of each class they form in observing the movement of their prey and the environment, and of taking appropriate notice in doing so of events which they cannot observe directly but which affect them only indirectly via the symbols shown by the other system. . . .
50. There can, in the first instance, be little question that machines could be constructed which in certain states (corresponding to the "intention" of catching the prey) will take such action as their internal representation of the environment indicates as leading to the capture of the prey. (Hayek Unpublished, 380)
Hayek prepares the reader for this discussion earlier in "Within Systems", noting that:
Provisionally we shall define this by saying that a description has been communicated by one system $S_1$ to another system $S_2$, if $S_1$, which is subject to certain actions by the environment which do not directly affect $S_2$, can in turn so act on $S_2$ that the latter will as a result behave, not as $S_1$ would behave under the influence of those causes, but as $S_2$ itself, in its peculiar individual position, would behave if the causes which have acted on $S_1$ had acted directly on $S_2$. (Hayek Unpublished, 363)
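Hayek's criterion can be made concrete with a hedged sketch of two simple systems: $S_1$ observes a cause that $S_2$ cannot, shows a symbol, and $S_2$ then behaves as it would have behaved, from its own position, had the cause acted on it directly. The events, symbols, and responses are invented, and the shared symbol code is an assumption of the sketch.

```python
# Two systems in Hayek's sense: S1 alone observes the environmental
# cause and shows a symbol; S2 decodes the symbol and behaves as it
# would have behaved had the cause acted on S2 directly.

SYMBOL_FOR_CAUSE = {"prey-moves-east": "east!", "prey-moves-west": "west!"}

# S2's own responses to each cause, reflecting its "peculiar individual
# position" (it cuts the prey off rather than chasing as S1 would).
S2_RESPONSE = {"prey-moves-east": "cut-off-east", "prey-moves-west": "cut-off-west"}

def s1_emit(observed_cause: str) -> str:
    """S1 classifies the cause it alone observes and shows a symbol."""
    return SYMBOL_FOR_CAUSE[observed_cause]

def s2_act(symbol: str) -> str:
    """S2 recovers the cause from the symbol and acts from its own position."""
    cause = {sym: c for c, sym in SYMBOL_FOR_CAUSE.items()}[symbol]
    return S2_RESPONSE[cause]

# The description has been communicated: S2's behavior is governed by an
# event it never observed directly.
print(s2_act(s1_emit("prey-moves-east")))  # -> cut-off-east
```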
Drawing from the nexus of literature motivated by Turing and von Neumann, it is quite natural that Hayek's theoretical work would be consistent with a theory of finite state machines. He draws from the inventors of modern computing, concerns himself with questions of undecidability, and, for the sake of clear reasoning (see chapter 8 of The Sensory Order), treats the nervous system as a classificatory machine.
There is much more to be said on the matter. Critically, we can locate Hayek's work from The Sensory Order and Studies in Philosophy, Politics, and Economics in the broader context of the development of computational theory. Further, we see that the question, "was Hayek an ACE?", is a particular instance of the more general question, "did Hayek model agents as finite state machines?" Evaluating the first question, Vriend carefully responds:
It would be presumptuous to judge whether Hayek might have been an ACE, but it seems clear that ACE is social theory in a Hayekian tradition. And therefore a further exchange of insights would seem fruitful.
Vriend is correct that it would be asserting too much to claim that Hayek was an ACE. His theory is compatible with agent-based computation because, in the work presented here, he modeled agency in terms of finite state machines. A straightforward presentation of Hayek's theory and context suggests that, yes, Hayek modeled the world as being composed of finite state machines. In so doing, he was able to explain systematically the position of the autonomous human agent in a society where knowledge is finite and divided. Economic coordination, and social coordination more generally, is a problem perfectly suited to modeling by use of finite state machines. We need not concern ourselves with the problem of determinism, as no modeling form seems able to escape the need for selecting apparently optimal "policies" - actions - based on some set of beliefs about the world. These beliefs concern what is and what should be: interpretation and moral judgment. Logical consistency is required for modeling. The modeler does not claim to provide a model of the world but, rather, a model whose logic is a reasonable metaphor of the world. The model itself is a particular instance of high theory, much as a well-informed narrative draws from high theory. So long as our applied models - simulations - are not given theoretical primacy, concerns about determinism are misplaced.
A caveat is in order. Hayek notes that "if a certain number of them [parts] are fixed, the rest is also determined" (365). It is possible that Hayek left some room for escape from determinism in the face of complexity. However, the consistent use of various forms of determinism, for example in The Sensory Order, gives the appearance that he was comfortable with this form of modeling without feeling the need to defend its implications with regard to freedom of will. At this point, I do not want to delve further than I have, as I intend to continue interrogating the text concerning this problem.
With regard to the distinction between high theory and deterministic modeling, it is useful to recognize that Hayek's theorizing adopts an abductive approach. A deterministic view enables evaluation of the quality of an explanation. These explanations are informed by basic principles consistent with the deductive logic of Misesian economics, but it is the shift toward abductive reasoning that differentiates Hayek as he moved his focus toward the role of knowledge and information in enabling economic coordination.
Hayek did not discard the principles underlying human action, but he did feel the need to explain more rigorously how markets function. To be clear, I see no evidence that Hayek would deny the following formulation of purposive human action: a person seeks to transform the state of the world from the one that he or she experiences, or expects to experience, into a more preferred state. The particular details depend upon agent knowledge, preferences, and circumstances. Preferences may change as circumstances change, since they are a function of circumstance; or the structure of preferences (which we might conceive of as a preference function) may itself change. If we seek to reason further about, say, the system-level consequences of these assertions, we must constrain our reasoning using deterministic logic. The logic of finite state machines is well suited to this purpose.
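As a minimal sketch of this formulation, consider an agent that compares the states reachable through its available actions against a preference that is itself a function of circumstance, and selects the action leading to the most preferred state. The states, actions, and preference scores are invented for illustration.

```python
# A purposive agent: from its current state, it evaluates the states its
# actions would produce against circumstance-dependent preferences and
# picks the action leading to the most preferred state.

# States reachable from each state via each action.
ACTIONS = {
    "hungry": {"hunt": "fed", "rest": "hungry"},
    "fed": {"hunt": "fed", "rest": "rested"},
}

def preference(state: str, circumstance: str) -> int:
    """Preferences are a function of circumstance: in the cold, being
    fed carries an extra premium."""
    base = {"hungry": 0, "fed": 1, "rested": 2}[state]
    bonus = 2 if (circumstance == "cold" and state == "fed") else 0
    return base + bonus

def choose_action(state: str, circumstance: str) -> str:
    """Select the action whose resulting state the agent most prefers."""
    options = ACTIONS[state]
    return max(options, key=lambda action: preference(options[action], circumstance))

print(choose_action("hungry", "warm"))  # -> hunt: "fed" beats "hungry"
print(choose_action("fed", "warm"))     # -> rest: "rested" beats "fed"
print(choose_action("fed", "cold"))     # -> hunt: in the cold, staying fed wins
```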
1
Hayek appears to hold a similar view. In a footnote from The Sensory Order on the discussion of free will, he reflects:
It may also be mentioned, although this has little immediate connexion with our main subject, that since the word 'free' has been formed to describe a certain subjective experience and can scarcely be defined except by reference to that experience, it could at most be asserted that the term is meaningless. But this would make any denial of the existence of free will as meaningless as its assertion. (193n1)
2
In fact, it is this perspective that enabled Hayek to draw from Kurt Gödel's incompleteness theorem to argue that a mind cannot fully comprehend its own order. Recognizing the mind as a finite state machine, we might also delimit the extent of human understanding via the closely related undecidability result known as the halting problem.