George Dyson has a contribution among the answers to the Edge question (here), writing about analog computers.
Imagine you need to find the midpoint of a stick. You can measure its length, using a ruler (or making a ruler, using any available increment) and digitally compute the midpoint. Or, you can use a piece of string as an analog computer, matching the length of the stick to the string, and then finding the middle of the string by doubling it back upon itself. This will correspond, without any loss of accuracy due to rounding off to the nearest increment, to the midpoint of the stick. If you are willing to assume that mass scales linearly with length, you can use the stick itself as an analog computer, finding its midpoint by balancing it against the Earth’s gravitational field.
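Dyson's point about rounding can be made concrete with a small sketch (my illustration, not from his piece): measuring a stick of irrational length with a ruler marked in fixed increments forces a round-off, while "folding the string" halves the true length exactly.

```python
# A stick whose true length is sqrt(2) -- not a multiple of any ruler increment.
stick = 2 ** 0.5

# Digital route: read the ruler to the nearest 0.1 mark, then halve.
increment = 0.1
measured = round(stick / increment) * increment
digital_midpoint = measured / 2

# Analog route: folding the string halves the true length, whatever it is.
analog_midpoint = stick / 2

print(abs(digital_midpoint - stick / 2))  # rounding error, here about 0.007
print(abs(analog_midpoint - stick / 2))   # 0.0 -- no round-off at all
```

A finer ruler shrinks the digital error but never removes it for an incommensurable length; the fold has no increment to round to.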
So far so good; a nice example.
There is no precise distinction between analog and digital computing, but, in general, digital computing deals with integers, binary sequences, and time that is idealized into discrete increments, while analog computing deals with real numbers and continuous variables, including time as it appears to exist in the real world. The past sixty years have brought such advances in digital computing that it may seem anachronistic to view analog computing as an important scientific concept, but, more than ever, it is.
Here I have to differ a bit. I think there is a very precise distinction. Everything about a digital system is discrete and discontinuous: the strings of digits that make up a number (whether binary or not) are not infinitely long and therefore are discontinuous, like the time, marked out in the ticks of a clock. On the other hand, everything in an analog system is continuous: physical quantities like voltage, and real or scaled real time. An analog computer is a physical model of a system in which the starting conditions can be set and the behaviour of the model can then be followed over time. Digital computers are not physical models but mathematical/logical 'models'. Of course there can be hybrids: analog computers with some digital components as elements of the model, or digital computers with analog components that are sampled. Dyson goes on to discuss interesting analog aspects of social networks, for example the Facebook network, whose activity is a model of a social web.
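The distinction can be seen in what a digital computer has to do to follow an analog system's behaviour over time: chop time into clock ticks. A minimal sketch (my illustration): a discharging RC circuit, a classic analog-computer element, evolves continuously as dV/dt = -V/RC, while a digital simulation only approximates it step by step, with an error set by the tick size.

```python
import math

def simulate(v0, rc, t_end, dt):
    """Forward-Euler simulation of dV/dt = -V/RC using discrete ticks of size dt."""
    v = v0
    for _ in range(round(t_end / dt)):
        v += dt * (-v / rc)   # discrete update standing in for continuous decay
    return v

v0, rc, t_end = 1.0, 1.0, 1.0
exact = v0 * math.exp(-t_end / rc)      # the continuous ("analog") answer
coarse = simulate(v0, rc, t_end, dt=0.1)
fine = simulate(v0, rc, t_end, dt=0.001)

print(abs(coarse - exact))  # larger error with coarse clock ticks
print(abs(fine - exact))    # smaller error with finer ticks
```

The error shrinks as the ticks get finer but the simulation is always an approximation of the continuous trajectory, never the trajectory itself; the analog circuit simply is its own model.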
But my interest is in the brain, and I see it as an analog system. We were misled by the seemingly digital nature of the firing spikes of some neurons, but now that we are aware of firing rates, synapses, electromagnetic fields and so on, it is plain that the brain is a physical organ using continuous, not discrete, quantities. It is quite literally a physical system that models the world and the organism itself in that world. I think the brain is not digital, does not use digital-type commands, addresses, clock ticks and so on, and therefore does not use what is ordinarily meant by software algorithms. We can use the digital computer as a metaphor, but only to a limited extent; an analog-computer metaphor would be somewhat more realistic, although at some point the computer metaphor is likely to break down (as all metaphors eventually do).
Now, I am well aware that most people working in Artificial Intelligence are likely to disagree with my stance. The argument for an artificial brain goes thus: brains do cognition; cognition is computing; all computers are equivalent to the universal Turing machine; therefore a conventional type of computer (von Neumann) can emulate a brain, except that if the brain is analog there needs to be an approximation (but it will still be good enough), and it may take more resources than are available (but that is a different problem). Each statement has some degree of inaccuracy unless the terms are very carefully defined. Here is part of Whole Brain Emulation: A Roadmap (pdf):
Whole brain emulation, often informally called uploading or downloading, has been the subject of much science fiction and also some preliminary studies. The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain.
So this is the general idea. How is the analog problem dealt with?
A surprisingly common doubt expressed about the possibility of simulating even simple neural systems is that they are analog rather than digital. The doubt is based on the assumption that there is an important qualitative difference between continuous and discrete variables. If computations in the brain make use of the full power of continuous variables the brain may essentially be able to achieve hypercomputation, enabling it to calculate things an ordinary Turing machine cannot. However, brains are made of imperfect structures which are, in turn, made of discrete atoms obeying quantum mechanical rules forcing them into discrete energy states, possibly also limited by a space‐time that is discrete on the Planck scale (as well as noise, see below) and so it is unlikely that the high precision required of hypercomputation can be physically realized. Even if hypercomputation were physically possible, it would by no means be certain that it is used in the brain, and it might even be difficult to detect if it were (the continual and otherwise hard to explain failure of WBE would be some evidence in this direction). However, finding clear examples of non‐Turing computable abilities of the mind would be a way of ruling out Turing emulation.
A discrete approximation of an analog system can be made arbitrarily exact by refining the resolution. If an M-bit value is used to represent a continuous signal, the signal-to-noise ratio is approximately 20·log10(2^M) dB (assuming uniform distribution of discretization errors, which is likely for large M). This can relatively easily be made smaller than the natural noise sources such as unreliable synapses, thermal, or electrical noise. The thermal noise is on the order of 4.2·10⁻²¹ J, which suggests that energy differences smaller than this can be ignored unless they occur in isolated subsystems or on timescales fast enough to not thermalize. Field potential recordings commonly have fluctuations on the order of millivolts due to neuron firing and a background noise on the order of tens of microvolts. Again this suggests a limit to the necessary precision of simulation variables.
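The Roadmap's figures are easy to check (a sketch of mine, not from the Roadmap): each bit of resolution buys roughly 6 dB of signal-to-noise ratio, and the quoted thermal-noise energy is about k_B·T near body temperature.

```python
import math

def quantization_snr_db(m_bits):
    """SNR of an M-bit quantization: 20*log10(2^M) dB, about 6.02 dB per bit."""
    return 20 * math.log10(2 ** m_bits)

print(quantization_snr_db(8))    # roughly 48 dB
print(quantization_snr_db(16))   # roughly 96 dB, far above biological noise floors

k_B = 1.380649e-23               # Boltzmann constant, J/K
print(k_B * 305)                 # about 4.2e-21 J at ~305 K, the figure quoted
```

On this arithmetic a 16-bit variable already swamps millivolt fluctuations over a tens-of-microvolts noise floor (a ratio of only ~40 dB or so), which is the Roadmap's point: discretization noise can be pushed below the brain's own noise.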
I would be the last person to say that this will definitely not work; I am just not willing to bet a penny on success. There is that old saying that what is not understood seems easy; this is difficult, and the more so because of our lack of understanding. I note that the Roadmap has a curious paragraph mentioning understanding (underlining added):
The interplay between biological realism (attempting to be faithful to biology), completeness (using all available empirical data about the system), tractability (the possibility of quantitative or qualitative simulation) and understanding (producing a compressed representation of the salient aspects of the system in the mind of the experimenter) will often determine what kind of model is used. The appropriate level of abstraction and method of implementation depends on the particular goal of the model. In the case of WBE, the success criteria discussed below place little emphasis on understanding, but much emphasis on qualitatively correct dynamics, requiring much biological realism (up to a point, set by scale separation) and the need for data‐driven models. Whether such models for whole brain systems are tractable from a modelling and simulation standpoint is the crucial issue.
Over time and at great expense, we will create mock-ups of various areas and functions in the brain in the search to understand it. They will be worth the effort and expense. And when we have a fairly good understanding of how the whole brain works, we will probably choose not to put the resources into a whole and excellent emulation and instead use the resources for tasks that computers do better than brains.