Future Rovers May Drive Themselves on Other Planets

5 July 2012

Michael Schirber, Astrobiology Magazine

It’s a hot summer day, and your eyes spot an ice cream cart up ahead. Without even really thinking, you start walking that direction. Planetary scientists would like to give robots that kind of visual recognition—not for getting ice cream, but for finding scientifically interesting targets.

Currently, rovers and other space vehicles are still largely dependent on commands from their human controllers back on Earth. But to decide what commands to send, operators must wait to receive images and other pertinent information from the spacecraft. Because rovers don’t have powerful antennas, this so-called downlink usually takes a lot of time.

The data bottleneck means rovers often “twiddle their thumbs” between subsequent commands.

[Image: A concept image for the ExoMars rover that is being developed for a 2018 mission to Mars. CREDIT: ESA]
“Our goal is to make smart instruments that can do more within each command cycle,” says David Thompson of the Jet Propulsion Laboratory in Pasadena, Calif.

Thompson is heading a project called TextureCam, which involves creating a computer vision package that can map a surface by identifying geological features. It is primarily envisioned for a rover, but it could also benefit a spacecraft visiting an asteroid or an aerobot hovering in the atmosphere of a distant world. [Curiosity – The SUV of Mars Rovers]

With funding from NASA’s Astrobiology Science and Technology for Exploring Planets (ASTEP) program, Thompson’s team is currently refining its computer algorithm, with an eventual plan to build a prototype instrument that can map an astrobiologically relevant field site.

[Image: A TextureCam analysis of a Mars image is able to distinguish rocks from soil. CREDIT: NASA/JPL/Caltech/Cornell]
Roam rover, roam rover

Rovers have already made great advances in autonomy. Current prototypes can travel as much as a kilometer on their own using on-board navigation software, allowing them to cover a much larger territory.

But one concern is that a rover may literally drive over a potentially valuable piece of scientific real estate and not even realize it. Giving a rover some rudimentary visual identification capabilities could help avoid missing “the needle in the haystack,” as Thompson refers to the hidden clues that astrobiologists hope to uncover on other planets.

“If the rover can make simple distinctions, we can speed up the reconnaissance,” he says. As it drives along, the rover could snap several images and use on-board software to prioritize which images to downlink to Earth.
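The prioritization step could look something like the following minimal sketch. This is purely illustrative: the scoring rule, function names, and data layout are assumptions for the example, not TextureCam's actual scheme.

```python
# Hypothetical sketch of downlink prioritization: score each captured image
# (here, by the fraction of pixels an onboard classifier flagged as
# scientifically interesting) and downlink the highest-scoring ones first.

def prioritize_images(images, score_fn, downlink_budget):
    """Return the 'downlink_budget' images with the highest scores."""
    ranked = sorted(images, key=score_fn, reverse=True)
    return ranked[:downlink_budget]

# Example: images represented as (name, interesting_pixel_fraction) pairs.
images = [("img_a", 0.02), ("img_b", 0.31), ("img_c", 0.17)]
chosen = prioritize_images(images, score_fn=lambda im: im[1], downlink_budget=2)
# chosen == [("img_b", 0.31), ("img_c", 0.17)]
```

The key design point is that the rover only needs a relative ordering, not perfect classification: even a crude interestingness score lets the limited downlink bandwidth carry the most promising images first.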

And while waiting for its next set of commands, it could pick a potentially interesting geological feature and then drive up close to take a detailed picture or even perform some simple chemical analysis.

“You could start the next day with the instrument sitting in front of a prime location,” Thompson says.

Instead of spending time trying to get the rover from point A to point B, mission controllers could concentrate on doing the higher level scientific investigation that the rover can’t do. At least, not yet. [NASA’s Mars Rover Curiosity: 11 Amazing Facts]

“The field being investigated by David Thompson is vital to cope with the flood of remote sensing data returned from spacecraft,” says Anthony Cook of Aberystwyth University in the UK, who is not involved with TextureCam.

There are other projects working on computer vision for rovers. In 2010, the Mars rover Opportunity received a software upgrade called AEGIS that can identify scientifically interesting rocks. A project in Chile’s Atacama Desert used a similar rock-detection system on its rover, Zoë. And ESA’s ExoMars mission is developing computer vision that can detect objects in the rover’s vicinity.

TextureCam differs from these other efforts in that it maps the surface rather than trying to isolate particular objects. It’s a more general strategy that can identify terrain characteristics, such as weathering or fracturing.

Recognizing a rock face

The new approach by Thompson’s group focuses on the “texture” of an image, which is computer vision terminology for the statistical patterns that exist in an array of pixels. The same kind of image analysis is being used in more common day-to-day applications.

For example, the web is inundated with huge photo archives that haven’t been sorted in any systematic way. Several companies are developing “search engines” that can identify objects in digital images. If you were looking for, say, an image with a “blue dog” or a “telephone booth,” these programs could sift through a collection of photos to find those that match the particular criteria.

Additionally, many digital cameras detect faces in the camera frame and automatically adjust the focus depending on how far away the faces are. And some new video game consoles have sensors to detect the bodily pose of a game player.

What all these technologies have in common is a sophisticated analysis of image pixels. The relevant software programs typically look for signals in the variations of brightness or the shades of color that are characteristic of a telephone or a face or a rock.

These signals often have little to do with the way we might describe these objects.

“The software identifies statistical properties that might not be obvious to the human eye,” Thompson says.
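As a rough illustration of the idea, and emphatically not TextureCam's actual algorithm, a per-pixel texture label can be derived from simple local statistics of brightness, such as the standard deviation within a small window. Smooth soil has low local contrast; rocky, fractured surfaces have high local contrast. All thresholds and names below are invented for the example.

```python
import numpy as np

def local_texture_stats(image, window=5):
    """Compute the local mean and standard deviation of brightness
    around each pixel, using a square sliding window (edges left zero)."""
    h, w = image.shape
    k = window // 2
    means = np.zeros((h, w))
    stds = np.zeros((h, w))
    for i in range(k, h - k):
        for j in range(k, w - k):
            patch = image[i - k:i + k + 1, j - k:j + k + 1]
            means[i, j] = patch.mean()
            stds[i, j] = patch.std()
    return means, stds

def classify_pixels(image, std_threshold=10.0, window=5):
    """Label each pixel 1 ('rock-like', high local contrast) or
    0 ('soil-like', smooth). The threshold is arbitrary and illustrative."""
    _, stds = local_texture_stats(image, window)
    return np.where(stds > std_threshold, 1, 0)

# Synthetic test image: smooth "soil" with a noisy "rock" patch inside.
rng = np.random.default_rng(0)
img = np.full((40, 40), 100.0)
img[10:20, 10:20] += rng.normal(0, 30, size=(10, 10))

labels = classify_pixels(img)  # pixels in the noisy patch get label 1
```

A real system would use richer statistics and a trained classifier rather than a single hand-picked threshold, but the principle is the same: the label comes from the statistical character of a pixel's neighborhood, not from recognizing an object's shape.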

