If you use neural networks, you may not realize that they were among the earliest computing structures studied as interest in computing grew in the 1940s and 1950s. The 1940s saw growing interest in cybernetic systems, and this early work attempted to frame such systems as deterministic machines guided by feedback derived over the course of operation. The most familiar such system is the thermostat. When you set the temperature on the thermostat governing your heating unit, the heat will remain on so long as the measured temperature is below the temperature you have set. Once the temperature reaches the target, the heater turns off. Then, when the detected temperature falls below the target, the heater turns back on, and so on.
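As a toy illustration of that feedback loop, here is a sketch in Python; the target temperature and the sequence of readings are invented for the example.

```python
# A toy sketch of the thermostat feedback loop described above.
# The readings are invented for illustration.
def thermostat(measured, target):
    """Deterministic feedback: heat stays on below the target, off at or above it."""
    return "heater on" if measured < target else "heater off"

for temp in [65, 68, 70, 71, 69]:
    print(temp, "->", thermostat(temp, target=70))
```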
One of the great innovations of early research modeling the nervous system was to envision it as a deterministic machine. In their path-breaking work, McCulloch and Pitts modeled the nervous system so that knowledge of the system at time $t$ translated into knowledge of the system's future path absent external interventions. Although researchers then and since have been well aware that the McCulloch-Pitts interpretation was a dimensionally reduced view of the system, their contribution allowed for the articulation of how information is transmitted across, and learned by, neural nets. And, conveniently for the cyberneticists out of whose work artificial intelligence research grew, the McCulloch-Pitts neural net was computationally equivalent to a Turing machine. Marvin Minsky reflects on the significance of the work of McCulloch and Pitts over two decades later:
It should be understood clearly that neither McCulloch, Pitts, nor the present writer considers these devices and machines to serve as accurate physiological models of nerve cells and tissues. They were not designed with that purpose in mind. They are designed for the representation and analysis of the logic of situations that arise in any discrete process, be it in brain, computer, or anywhere else. In theories which are more seriously intended to be brain models, the 'neurons' have to be much more complicated. The real biological neuron is much more complex than our simple logical units -- for the evolution of nerve cells has led to very intricate and specialized organs. At this point, the McCulloch-Pitts 'cells' or 'neurons' are quite sufficient for our purposes. Our present goal is only to indicate how, starting with a set of very simple elements, one can construct machines of all sorts. (Minsky 1967, 32)
Later in the same chapter, Minsky provides examples of programs that can be straightforwardly accomplished using neural nets. For example, binary counting can be accomplished when nodes in the chain represent the binary digits (that is, $2^0$, $2^1$, $\ldots$, $2^k$). Each time a pulse flips some node ($2^j$) from $1$ to $0$, a carry signal is sent to the next node ($2^{j+1}$) to flip its state in turn, possibly triggering a further carry. In the last step, the values from each node are transmitted to a final node that aggregates the inputs into a single value. While this may seem like a pedantic exercise, simple models like this were critical in elaborating programs (machines) that could be implemented by a general Turing machine.
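To see the carry mechanism at work, here is a minimal Python sketch of such a counting chain; the function names `pulse` and `value` are my own, not Minsky's.

```python
# A minimal sketch of the binary-counting chain described above: each node
# holds one binary digit (2^0, 2^1, ..., 2^k), and a node that flips from
# 1 to 0 sends a carry signal to the next node in the chain.

def pulse(nodes, j=0):
    """Send a pulse to node j; propagate a carry when a node flips 1 -> 0."""
    if j >= len(nodes):
        return  # overflow: the pulse falls off the end of the chain
    nodes[j] ^= 1          # toggle the digit
    if nodes[j] == 0:      # a 1 -> 0 transition emits a carry
        pulse(nodes, j + 1)

def value(nodes):
    """Aggregate the digits into a single integer, like the final node."""
    return sum(bit << j for j, bit in enumerate(nodes))

nodes = [0, 0, 0, 0]       # digits 2^0 .. 2^3
for count in range(1, 6):
    pulse(nodes)
    print(count, nodes, value(nodes))
# 1 [1, 0, 0, 0] 1 ; 2 [0, 1, 0, 0] 2 ; ... ; 5 [1, 0, 1, 0] 5
```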
Building McCulloch-Pitts Neurons
In application, what we refer to as the McCulloch-Pitts neuron (which is simpler than the cases presented by the authors) receives an input vector $\boldsymbol{s}$ of length $k$, where each element is either $1$ or $0$. Each receptor $i$ that receives $s_i$ is subject to a weight, $w_i$; thus, alongside the vector $\boldsymbol{s}$ sits the weight vector $\boldsymbol{w}$. If the sum of the weighted inputs, $\sum_{i=1}^{k} w_i s_i$, is greater than the threshold value $T$, then the gate is opened and the neuron is activated.
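This activation rule fits in a few lines of Python. The sketch below uses the strict inequality stated above (some presentations use $\geq$ instead); the function name `mp_neuron` and the AND-gate example are my own choices for illustration.

```python
import numpy as np

def mp_neuron(s, w, T):
    """A McCulloch-Pitts neuron: fire (1) when the weighted sum of the
    binary inputs exceeds the threshold T, otherwise rest (0)."""
    return int(np.dot(w, s) > T)

# An AND gate as an example: only both inputs on pushes the sum above T.
w = np.array([1, 1])
print(mp_neuron(np.array([1, 1]), w, T=1))  # 1: sum 2 > 1, fires
print(mp_neuron(np.array([1, 0]), w, T=1))  # 0: sum 1 is not > 1, rests
```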
For the purpose of learning about the neural networks that are part of the modern repertoire, it is helpful to build a very simple neuron, which we can do with a Python script (I must express my appreciation for Introduction to Neural Network Models of Cognition, a hands-on text that elaborates theory using examples in Python).
The neuron that I build below is conveyed by a plot of a networkx graph. The state of the neuron is indicated in the title and by color: if the neuron fires, it is green; if it rests, it is red; and if we do not know the state, it is grey. Let's start by creating an explanatory visualization. After that, we will create data to simulate a neuron. Instead of elaborating in paragraph form, I have annotated the script and its variables sufficiently to make it easy to interpret.
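The script below is a sketch of a visualization along these lines, assuming a star-shaped graph of $k$ receptors feeding one central neuron; the function name `draw_neuron` and the layout choices are mine, not from the text mentioned above.

```python
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np

def draw_neuron(s, w, T=None):
    """Plot k input receptors feeding a single neuron; color the neuron
    green if it fires, red if it rests, grey if the state is unknown."""
    if T is None:
        state, color = "unknown", "grey"
    elif np.dot(w, s) > T:  # strict inequality, as in the text above
        state, color = "fires", "green"
    else:
        state, color = "rests", "red"
    G = nx.Graph()
    # One node per receptor, labeled with its input value, wired to the neuron.
    inputs = [f"$s_{i}$={s[i]}" for i in range(len(s))]
    G.add_edges_from((inp, "neuron") for inp in inputs)
    node_colors = [color if n == "neuron" else "lightgrey" for n in G.nodes]
    nx.draw(G, nx.spring_layout(G, seed=2), node_color=node_colors,
            with_labels=True, node_size=1200)
    plt.title(f"Neuron state: {state}")
    plt.show()

# Example: two excitatory inputs, threshold 1 -> the neuron fires (green).
draw_neuron(s=np.array([1, 1]), w=np.array([1, 1]), T=1)
```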