Friston’s Law


New Scientist magazine ran an article by G. Huang about Friston’s theory, asking whether it is a unified theory of the brain.

“…The brain is much messier than a physical system. It is the product of half a billion years of evolution. It performs myriad functions – reasoning, memory, perception, learning, attention and emotion to name just a few – and uses a staggering number of different types of cells, connections and receptors. So it does not lend itself to being easily described by simple mathematical laws. That hasn’t stopped researchers in the growing field of computational neuroscience from trying. In recent years, they have sought to develop unifying ideas about how the brain processes information so that they can apply them to the design of intelligent machines.

Until now none of their ideas has been general or testable enough to arouse much excitement in straight neuroscience. But a group from University College London may have broken the deadlock. Neuroscientist Karl Friston and his colleagues have proposed a mathematical law that some are claiming is the nearest thing yet to a grand unified theory of the brain. From this single law, Friston’s group claims to be able to explain almost everything about our grey matter…

Friston’s ideas build on an existing theory known as the “Bayesian brain”, which conceptualises the brain as a probability machine that constantly makes predictions about the world and then updates them based on what it senses…”The brain is an inferential agent, optimising its models of what’s going on at this moment and in the future,” says Friston. In other words, the brain runs on Bayesian probability…
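The “probability machine” idea above can be made concrete with a toy sketch (our illustration, not Friston’s actual model): an agent holds a prior belief in a hypothesis, receives a noisy sensory cue, and updates the belief with Bayes’ rule. The hypothesis, cue, and the likelihood numbers are all made up for the example.

```python
# Toy "Bayesian brain" sketch (illustrative only, not Friston's model).
# Hypothesis H: "it is raining". The agent updates its belief each time it
# gets a noisy sensory cue (say, a dripping sound) via Bayes' rule:
#   P(H | cue) = P(cue | H) P(H) / P(cue)

def bayes_update(prior, p_cue_given_h, p_cue_given_not_h):
    """Return the posterior P(H | cue) given the prior P(H)."""
    numerator = p_cue_given_h * prior
    evidence = numerator + p_cue_given_not_h * (1.0 - prior)
    return numerator / evidence

belief = 0.2                 # prior: 20% chance of rain (assumed number)
for _ in range(3):           # three successive dripping-sound cues
    belief = bayes_update(belief, p_cue_given_h=0.9, p_cue_given_not_h=0.3)

print(round(belief, 3))      # belief rises towards certainty: 0.871
```

Each cue that is more likely under the hypothesis than under its negation pushes the belief upward; this is the “constantly makes predictions and updates them” loop in miniature.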

This is where Friston’s work comes in. In the 1990s he was working next door to Hinton. At that time Hinton was beginning to explore the concept of “free energy” as it applies to artificial neural networks. Free energy originates from thermodynamics and statistical mechanics, where it is defined as the amount of useful work that can be extracted from a system, such as a steam engine. It is roughly equivalent to the difference between the total energy in the system and its “useless energy”, or entropy.

Hinton realised that free energy was mathematically equivalent to a problem he was familiar with: the difference between the predictions made by an artificial neural network and what it actually senses. He showed that you could solve some tough problems in machine learning by treating this “prediction error” as free energy, and then minimising it…
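The idea of treating prediction error as a quantity to be minimised can be sketched in a few lines (again our toy illustration, not Hinton’s actual machine-learning work): a one-parameter “model” predicts a value, compares it with what it senses, and descends the gradient of the squared error until the two match.

```python
# Toy sketch (illustrative, not Hinton's formulation): minimise the squared
# "prediction error" E = (sensed - guess)**2 by gradient descent on the
# model's single internal parameter, `guess`.

def minimise_prediction_error(sensed, guess=0.0, lr=0.1, steps=100):
    """Run gradient descent on E = (sensed - guess)**2; return final guess."""
    for _ in range(steps):
        error = sensed - guess        # the prediction error
        guess += lr * 2 * error       # step against dE/dguess = -2 * error
    return guess

# The internal guess converges to the sensed value, i.e. the prediction
# error is driven towards zero.
print(minimise_prediction_error(sensed=5.0))
```

Real networks minimise an error over many parameters at once, but the principle is the same: change whatever can change until predictions stop being surprised by the input.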

Friston developed the free-energy principle to explain perception, but he now thinks it can be generalised to other kinds of brain processes as well. He claims that everything the brain does is designed to minimise free energy or prediction error. “In short, everything that can change in the brain will change to suppress prediction errors, from the firing of neurons to the wiring between them, and from the movements of our eyes to the choices we make in daily life,” he says.”

 

2 thoughts on “Friston’s Law”

  1. This is not correct. What is minimized is not “prediction error” but “error that leads to non-evolutionarily-advantageous prediction errors”.

    When you are hungry, you (nearly always) choose something that is larger to eat than what you “need”. Having perception biased to choose more food instead of less is an evolutionary advantage and so is (nearly always) the default choice.

    All of physiology uses “feed forward” kind of control. Glucose is put into blood so it will be there when the blood passes through the tissue compartment that needs it. Physiology trades reduced efficiency for speed.
