Using decades’ worth of data from human motion perception studies, researchers have trained an artificial neural network to estimate the speed and direction of motion in image sequences.
The new system, called MotionNet, is designed to closely match the motion-processing structures inside a human brain. This has allowed the researchers to explore features of human visual processing that cannot be directly measured in the brain.
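The full MotionNet architecture is described in the paper; purely as an illustrative sketch – the frame count, layer sizes and readout below are assumptions, not the authors’ design – a network of this general flavour, mapping a short stack of image frames to a velocity estimate, might look like this in PyTorch:

```python
# A minimal, purely illustrative sketch (not the published MotionNet architecture):
# a small convolutional network that maps a short stack of grayscale frames
# to a 2-D velocity estimate (horizontal and vertical speed).
import torch
import torch.nn as nn

class TinyMotionNet(nn.Module):
    def __init__(self, n_frames=6, n_units=64):
        super().__init__()
        # spatiotemporal filtering: frames are stacked along the channel axis
        self.features = nn.Sequential(
            nn.Conv2d(n_frames, n_units, kernel_size=7),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool unit responses over space
        )
        # read out a velocity estimate (vx, vy) from the pooled unit responses
        self.readout = nn.Linear(n_units, 2)

    def forward(self, frames):            # frames: (batch, n_frames, H, W)
        x = self.features(frames).flatten(1)
        return self.readout(x)            # (batch, 2) velocity estimate

# Example: estimate velocity for a random 6-frame, 32x32 sequence
net = TinyMotionNet()
print(net(torch.rand(1, 6, 32, 32)).shape)   # torch.Size([1, 2])
```

The appeal of any such model is that, once trained, the response of every internal unit can be inspected directly – the ‘complete access’ the researchers describe below.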
Their study, published today in the Journal of Vision, uses the artificial system to describe how space and time information is combined in our brain to produce our perceptions, or misperceptions, of moving images.
The brain can be easily fooled. For instance, if there’s a black spot on the left of a screen, which fades while a black spot appears on the right, we will ‘see’ the spot moving from left to right – this is called ‘phi’ motion. But if the spot that appears on the right is white on a dark background, we ‘see’ the spot moving from right to left, in what is known as ‘reverse-phi’ motion.
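As a rough illustration of how such stimuli are built (a simplified two-frame version with made-up sizes and spacing, not the exact displays used in the study), phi and reverse-phi can be sketched as NumPy image arrays, with the second spot’s contrast inverted for reverse-phi:

```python
# Illustrative sketch with hypothetical parameters: two-frame phi and reverse-phi
# stimuli on a mid-grey background. In phi motion, a dark spot disappears on the
# left while an identical dark spot appears further right; in reverse-phi the
# second spot's contrast is inverted.
import numpy as np

def two_frame_stimulus(size=64, spot=6, shift=10, reverse=False):
    background = 0.5                       # mid-grey background
    frame1 = np.full((size, size), background)
    frame2 = np.full((size, size), background)
    row, col = size // 2, size // 4
    frame1[row-spot:row+spot, col-spot:col+spot] = 0.0     # dark spot on the left
    second_contrast = 1.0 if reverse else 0.0              # inverted contrast for reverse-phi
    frame2[row-spot:row+spot, col+shift-spot:col+shift+spot] = second_contrast
    return np.stack([frame1, frame2])      # shape: (2, size, size)

phi = two_frame_stimulus(reverse=False)          # perceived left-to-right motion
reverse_phi = two_frame_stimulus(reverse=True)   # perceived right-to-left motion
```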
The researchers reproduced reverse-phi motion in the MotionNet system, and found that it made the same mistakes in perception as a human brain – but, unlike with a human brain, they could look closely inside the artificial system to see why this was happening. They found that its neurons are ‘tuned’ to the direction of movement, and that in MotionNet, reverse-phi stimuli were triggering neurons tuned to the direction opposite to the actual movement.
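One standard way to picture this kind of direction-tuned readout (the numbers and tuning curves below are hypothetical, not measurements from MotionNet) is a small population of units, each preferring one direction, whose responses are combined by a vector average; if a stimulus drives the units tuned to the opposite direction, the decoded direction flips:

```python
# Illustrative population-vector readout over direction-tuned units
# (hypothetical responses, not measurements from MotionNet).
import numpy as np

preferred = np.deg2rad(np.arange(0, 360, 45))     # 8 units with preferred directions

def population_readout(responses):
    # vector average: weight each unit's preferred direction by its response
    x = np.sum(responses * np.cos(preferred))
    y = np.sum(responses * np.sin(preferred))
    return np.rad2deg(np.arctan2(y, x)) % 360

# Phi stimulus: units tuned near 0 degrees (rightward) respond most strongly
phi_responses = np.exp(np.cos(preferred - 0.0))
print(population_readout(phi_responses))           # ~0 degrees: rightward

# Reverse-phi: contrast inversion drives units tuned to the opposite direction
reverse_phi_responses = np.exp(np.cos(preferred - np.pi))
print(population_readout(reverse_phi_responses))   # ~180 degrees: leftward
```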
The artificial system also revealed new information about this common illusion: the apparent speed of reverse-phi motion depends on how far apart the dots are, in the opposite way to what would be expected. Dots ‘moving’ at a constant speed appear to move faster if spaced a short distance apart, and more slowly if spaced further apart.
“We’ve known about reverse-phi motion for a long time, but the new model generated a completely new prediction about how we experience it, which no-one has ever looked at or tested before,” said Dr Reuben Rideaux, a researcher in the University of Cambridge’s Department of Psychology and first author of the study.
Humans are reasonably good at working out the speed and direction of a moving object just by looking at it. It’s how we can catch a ball, estimate depth, or decide if it’s safe to cross the road. We do this by processing the changing patterns of light into a perception of motion – but many aspects of how this happens are still not understood.
“It’s very hard to directly measure what’s going on inside the human brain when we perceive motion – even our best medical technology can’t show us the entire system at work. With MotionNet we have complete access,” said Rideaux.
Thinking things are moving at a different speed than they really are can sometimes have catastrophic consequences. For example, people tend to underestimate how fast they are driving in foggy conditions, because dimmer scenery appears to be moving past more slowly than it really is. The researchers showed in a previous study that neurons in our brain are biased towards slow speeds, so when visibility is low they tend to guess that objects are moving more slowly than they actually are.
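A common way to formalise such a bias (shown here as a generic Bayesian-observer sketch with arbitrary numbers, not the authors’ implementation) is a prior favouring slow speeds: when the input is unreliable, as in fog, the evidence about speed becomes broader and the estimate is pulled further towards slow.

```python
# Illustrative "slow-speed prior" sketch (standard Bayesian-observer idea,
# hypothetical numbers). A less reliable input gives a wider likelihood over
# speeds, so the prior pulls the estimate further towards slow speeds.
import numpy as np

speeds = np.linspace(0, 10, 1001)          # candidate speeds (arbitrary units)
true_speed = 6.0
prior = np.exp(-speeds / 2.0)              # prior favouring slow speeds

def estimate(likelihood_width):
    likelihood = np.exp(-(speeds - true_speed) ** 2 / (2 * likelihood_width ** 2))
    posterior = likelihood * prior
    return speeds[np.argmax(posterior)]

print(estimate(0.5))   # clear viewing: estimate stays close to the true speed of 6
print(estimate(3.0))   # foggy, low-contrast viewing: estimate is markedly slower
```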
Revealing more about the reverse-phi illusion is just one example of the way that MotionNet is providing new insights into how we perceive motion. With confidence that the artificial system is solving visual problems in a very similar way to human brains, the researchers hope to fill in many gaps in current understanding of how this part of our brain works.
Predictions from MotionNet will need to be validated in biological experiments, but the researchers say that knowing which part of the brain to focus on will save a lot of time.
Rideaux and his study co-author Dr Andrew Welchman are part of Cambridge’s Adaptive Brain Lab, where a team of researchers is examining the brain mechanisms underlying our ability to perceive the structure of the world around us.
This research was supported by the Leverhulme Trust and the Isaac Newton Trust.
Reference
Rideaux, R. & Welchman, A.E.: ‘Exploring and explaining properties of motion processing in biological brains using a neural network.’ Journal of Vision, Feb 2021. DOI: 10.1167/jov.21.2.11