Micro-robot eyes inspired by jumping spider
Jumping spiders hunt and pounce on their prey rather than using webs. But their eyes are so tiny that researchers wondered how they could provide information about distance, i.e. depth perception.
The eyes have four layers of light-sensitive cells. That means that light focused on one layer will be out of focus (blurry) on the others. So what is the point of having a blurry image?
Out-of-focus: design feature for depth perception
Far from being a design flaw, this enables spiders to judge distances accurately using a phenomenon called depth from defocus.1 To explain how this works, think about our own eyes. Our two eyes produce slightly different images, and our brain works out depth from the disparity between them. But what if we had only one working eye? How could we tell distance?
Well, suppose that this one-eyed person also needs glasses some of the time, because he is either far-sighted (long-sighted, or hyperopic) or near-sighted (short-sighted, or myopic).
In either case, this visual defect can give a one-eyed person some idea of distance. For example, a far-sighted person would realize that a blurry object must be close, and a near-sighted person would realize that a blurry object must be far away; otherwise, the object in each case would appear clear.
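What makes this cue usable is that the amount of defocus blur varies predictably with distance. A minimal sketch, using the standard thin-lens model with illustrative numbers of my own choosing (a 17 mm focal length and 4 mm aperture are rough human-eye figures, not from the article):

```python
# Hypothetical sketch: how defocus blur encodes distance (thin-lens model).
# All numbers are illustrative assumptions, not from the article.

def blur_circle_mm(obj_dist_mm, focus_dist_mm, focal_mm, aperture_mm):
    """Diameter of the blur circle for an object at obj_dist_mm when the
    lens is focused at focus_dist_mm."""
    # Thin-lens equation 1/f = 1/d + 1/v gives the image distance v
    # for the in-focus plane and for the actual object.
    v_focus = 1.0 / (1.0 / focal_mm - 1.0 / focus_dist_mm)
    v_obj = 1.0 / (1.0 / focal_mm - 1.0 / obj_dist_mm)
    # Similar triangles: blur diameter grows with the sensor-plane mismatch.
    return aperture_mm * abs(v_obj - v_focus) / v_obj

# An eye focused at 2 m: the closer an object is, the blurrier it looks,
# so "more blur" can be read as "closer" on the near side of the focus plane.
for d in (500.0, 1000.0, 2000.0, 4000.0):
    print(f"object at {d} mm -> blur circle {blur_circle_mm(d, 2000.0, 17.0, 4.0):.3f} mm")
```

Blur is zero at the focus plane and grows as the object moves away from it, which is exactly the relationship the brain (or an algorithm) can invert to recover distance.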
But in the jumping spider, the out-of-focus images are not a defect that happens to have an advantage,2 but an ingenious design. And now human engineers have copied the concept.
Biomimetic robot eyes
These spider eyes have inspired a team of researchers at Harvard’s School of Engineering and Applied Sciences (SEAS) to make a very small depth sensor, which could be used in micro-robots and virtual-reality headsets.
But instead of using multiple layers, they used a metalens, also pioneered by researchers at SEAS. This is a flat surface that uses nano-structures to focus light, and it can be made much thinner than a conventional lens assembly doing the same job. In this case, the metalens splits the incoming light into two adjacent images with different amounts of blur. An ultra-efficient algorithm then analyses both images and generates a depth map.3
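The two-image cue itself can be sketched in a few lines. This is not the paper’s actual algorithm, just a toy illustration of the principle: blur the same scene by two different amounts (standing in for the metalens’s two focus settings) and compare local contrast; the drop in contrast between the two captures encodes the blur, and hence the distance.

```python
# Hypothetical sketch of the two-image defocus cue; the function names and
# the moving-average "defocus" are my own simplifications, not the paper's.

def box_blur(signal, radius):
    """Blur a 1-D signal with a moving-average kernel (stand-in for defocus)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def edge_contrast(signal):
    """Peak gradient magnitude: the sharper the image, the larger the value."""
    return max(abs(b - a) for a, b in zip(signal, signal[1:]))

scene = [0.0] * 20 + [1.0] * 20      # a sharp edge in the scene
near_focus = box_blur(scene, 1)      # image 1: mild defocus
far_focus = box_blur(scene, 4)       # image 2: strong defocus

# The contrast difference between the two captures is the cue a
# depth-from-defocus algorithm inverts to produce a per-pixel depth map.
print(edge_contrast(near_focus), edge_contrast(far_focus))
```

In a real sensor this comparison is done per pixel over two 2-D images, but the principle is the same: the relative blur between the two captures, not the blur in either image alone, is what reveals depth.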
Evolution did it?
As we often see in biomimetic papers, there is a fact-free homage to evolution:
Evolution has produced vision systems that are highly specialized and efficient, delivering depth-perception capabilities that often surpass those of existing artificial depth sensors.4
Of course, there is no explanation of how such a system could evolve. And since it took much ingenuity to make a copy, how much more ingenious is the One who made the original? Note, too, that more than the physical structure is needed: also required is a highly sophisticated program running on the micro-computer that serves as the robot’s brain!
References and notes
- Sarfati, J., Spiders’ ‘out-of-focus’ eyes help them see depth, Creation 34(3):7, 2012; 3-D Vision for tiny eyes, news.sciencemag.org, 26 Jan 2012. Return to text.
- See Wieland, C., Beetle bloopers, Creation 19(3):30, 1997; creation.com/beetle. Return to text.
- Nature-inspired metalens sensor for use in microrobotics and wearable devices, azosensors.com, 30 Oct 2019. Return to text.
- Guo, Q. and 6 others, Compact single-shot metalens depth sensors inspired by eyes of jumping spiders, PNAS 116(46):22959–22965, 12 Nov 2019. Return to text.