If we can model the world using finite state machines, and these machines are, in practice, deterministic, must we import a belief in determinism at a deeper level of our analysis? This question is difficult for two reasons: 1) the limited extent of our knowledge, and 2) what one might want to believe about freedom of the will. Both of these challenges will play a role in the discussion. I will base my discussion of the topic on what Paul Lewis describes as a "computational argument" for a pragmatic dualism that treats emergent structures like consciousness as something that cannot be adequately described in terms of their individual parts (Lewis 2017). I will show that even if we presume that the world is deterministic, various factors necessarily limit the extent to which we can predict outcomes in complex systems. In accord with Hayek, what follows is not an argument for determinism but, rather, an argument for treating the world as deterministic in order to make modeling possible while simultaneously recognizing the epistemic limitations that constrain the generalizability of inferences drawn from the model.
For those who fear that determinism equates to control, one might alleviate their unease by recognizing that even if the world is deterministic, the details that determine outcomes are not known in sufficient depth for anyone to have perfect foresight. Nor are we able to compute, ahead of time, the interactions that generate unexpected behavior. Complex interaction within even just the modeled aspects of a socio-economic system limits the extent of prediction. This is why we must observe many iterations of a model to understand it; otherwise, the description provided by the script alone would suffice for understanding the system.
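The point can be made concrete with a small illustration of my own (not drawn from the text): Wolfram's Rule 110, an elementary cellular automaton. The update rule is fully deterministic and carries only eight bits of information, yet the patterns it produces are, in practice, discovered only by running the system and observing many iterations.

```python
# A minimal sketch: elementary cellular automaton Rule 110. The rule is
# deterministic and tiny, but its long-run behavior is not obvious from
# reading the rule; one must iterate the system to see it.

RULE = 110  # Wolfram's Rule 110, known to support complex behavior

def step(cells):
    """Apply the deterministic local rule to every cell simultaneously."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and iterate.
cells = [0] * 40
cells[20] = 1
history = [cells]
for _ in range(20):
    cells = step(cells)
    history.append(cells)

# Eight bits of rule; the trajectory must be observed, not read off the script.
for row in history[:5]:
    print("".join(".#"[c] for c in row))
```

The "script" here is the single line defining `step`; the printed history is the part that must be watched unfold.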
Reality lacks a script that humans can access directly. Our world, physical, biological, and social, includes not only a vast repertoire of behaviors and their interactions, but also probabilities that appear to govern transitions between states within a system. And supposing that agents in the system attempt to intelligently navigate the environment, any understanding of the environment must somehow integrate the agent's own intelligence. Finally, there is error in the interpretation of data as well as error in the data itself that agents use to interpret the environment.
Hayek recognized the challenges that complexity creates for predicting deterministic systems, and he appears not to have been bothered by determinism in modeling. In "The Theory of Complex Phenomena", Hayek observes that if the world operated in such a manner:
[t]he chief fact would continue to be, in spite of our knowledge of the principle on which the human mind works, that we should not be able to state the full set of particular facts which brought it about that the individual did a particular thing at a particular time. The individual personality would remain for us as much a unique and unaccountable phenomenon which we might hope to influence in a desirable direction by such empirically developed practices as praise and blame, but whose specific actions we could generally not predict or control, because we could not obtain the information of all the particular facts which determined it.
Notice that this is not a claim about the world but, rather, a conditional response. Even if the world is deterministic, humans still face epistemic limitations, and we, as theorists, must recognize them. Even one's own mind must remain a mystery to oneself.
Hayek was not alone. The genius who shaped the modern world perhaps more than any other, John von Neumann, believed that Gödel's theorem placed strict epistemic limits on self-understanding (von Neumann 1966, 47):
I am twisting a logical theorem a little, but it's a perfectly good logical theorem. It's a theorem of Gödel that the next logical step, the description of an object, is one class type higher than the object and is therefore asymptotically [?] infinitely longer to describe. I say that it's absolutely necessary; it's just a matter of complication when you get to this point.
He continues a few pages later:
The formal logical investigations of Turing went a good deal further than this. Turing proved that there is something for which you cannot construct an automaton; namely, you cannot construct an automaton which can predict in how many steps another automaton who can solve a certain problem will actually solve it. . . . it is characteristic of objects of low complexity that it is easier to talk about the object than produce it and easier to predict its properties than to build it. But in the complicated parts of formal logic it is always one order of magnitude harder to tell what an object can do than to produce the object.
When the editor of the manuscript, Arthur W. Burks, asked Gödel about this interpretation, Gödel replied that:
what von Neumann perhaps had in mind appears more clearly from the universal Turing machine. There it might be said that the complete description of its behavior is infinite because, in view of the non-existence of a decision procedure predicting its behavior, the complete description could be given only by an enumeration of all instances. . . . The universal Turing machine, where the ratio of the two complexities is infinity, might then be considered to be a limiting case of other finite mechanisms. This immediately leads to von Neumann's conjecture.
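Von Neumann's point about step counts can be illustrated with a toy example of my own choosing (not his): the Collatz map. The rule is deterministic and trivial to state, and every input ever tested terminates (though this is unproven in general), yet no known formula gives the number of steps an input will take; in practice, one simply runs the process and counts.

```python
# A minimal illustration (my example, not von Neumann's): for the Collatz
# map, no known closed form predicts the number of steps to reach 1, so
# the only general method available is to run the process and count.

def collatz_steps(n):
    """Count iterations of the deterministic Collatz rule until n reaches 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Nearby inputs take wildly different numbers of steps.
for n in (26, 27, 28):
    print(n, collatz_steps(n))  # 26 takes 10 steps; 27 takes 111
```

Telling what this two-line automaton will do is, as von Neumann suggests, harder than producing it.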
This need for greater complexity pushes the scientist into what Hayek refers to as a "pragmatic dualism". But perhaps this was not pragmatic at all. Observing the similarity of the two on this issue, Wiemer notes that "von Neumann saw it was also an epistemological necessity to have a duality of description for all phenomena - especially those involving life or living systems" (2021, 17). For fear that I am not being clear, I repeat: deterministic modeling does not vanquish uncertainty. And the resultant emergent phenomena require that we abandon attempts to explain their operation with anything more than general reference to related patterns that appear in the microfoundations.
We can add that, at best, we can describe transitions between states in a complex system probabilistically. The problem this naturally entails is that transition probabilities are typically not derived by deduction, as with a priori probability (counting cards from a fair deck, for instance), but must be inferred from a large number of observations, placing us in the realm of statistical probability. And with statistical probability, we are always at risk of surprise.
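The contrast can be sketched in a few lines, using an invented two-state observation sequence (an assumption for illustration, not data from the text): the transition probabilities are not deduced from any prior principle but estimated as relative frequencies of what has been observed.

```python
# A sketch (assumed two-state example): transition probabilities are
# inferred from counted observations, not deduced a priori.
from collections import Counter

observed = "AABABBBABAABBBAA"  # an assumed sequence of observed states

# Count each observed transition (state at t, state at t+1).
counts = Counter(zip(observed, observed[1:]))
totals = Counter(observed[:-1])

# Statistical estimate: relative frequency of each transition.
probs = {pair: c / totals[pair[0]] for pair, c in counts.items()}
for (s, t), p in sorted(probs.items()):
    print(f"P({t}|{s}) = {p:.2f}")
```

Note the built-in exposure to surprise: any transition absent from the sample is implicitly assigned probability zero, however possible it remains.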
The system under observation might surprise the average observer, or it may be incorrectly modeled, whether in the structure of the system analyzed or in the distribution supposed to represent the outcomes it generates. We may model incorrectly. Our samples may not be representative of the system. And classes of events that have never occurred before may lie entirely outside the realm of modeling (those unknown unknowns, the simple categorization that made Donald Rumsfeld's listeners laugh).
We can add to this yet another epistemic problem: error in the encoding and transmission of encodings. That is, not only do we face risk and uncertainty with regard to events that directly impact the outcomes of concern to a modeler; information itself is recorded imperfectly, whether due to corruption of data or a lack of perfect correspondence between the container representing the data and the event encoded. I will not elaborate on this further. It is sufficient here to note that the frame problem exists at every level at which information is recorded.
Now, let us suppose that the frame problem does not exist and focus on the problem of information corruption. While modern machines seem to operate perfectly, the components of a digital computer deteriorate over time and under the pressure of circumstance. Further, the medium through which digital messages are transmitted may introduce errors at some rate. Thus finite state machines, when they function as desired, hide the pervasiveness of this problem. I do not propose a solution, but in weighing the validity of the finite state machine metaphor, it is critical that we consider the kinds of variation that impact the decisions of the machine.