With global fire activity on the rise, many scientists are pushing for the increased availability of satellite monitoring data. The July 2013 Mountain Fire near Idyllwild, California [above] had grown to more than 100 square kilometers when this stereo view was captured.
Some men just want to watch the world burn. But sometimes it’s for a good reason: to keep a record of global wildfire patterns and occurrences so scientists can understand and learn to manage them better.
“We indeed have been observing an upward trend of fire activity in many areas of the globe,” says Ivan Csiszar, a satellite-based fire-detection and -monitoring researcher with the National Oceanic and Atmospheric Administration (NOAA). In the United States, for example, the increase has been particularly pronounced. The average area scorched each year by wildfires in Western states between 1996 and 2008 is 2.4 times as much as what burned between 1984 and 1995, according to research reported last year by Clark University geographers. Right now, the United States is in the middle of a particularly devastating wildfire season, with recent fires in Southern California and the catastrophic Arizona blaze that killed 19 firefighters.
Csiszar is part of the Global Observation of Forest and Land Cover Dynamics (GOFC/GOLD-Fire) project, an alliance of scientists devoted to sharing information on mapping and monitoring specifically to help understand and manage wildfires. GOFC/GOLD-Fire gives response teams and researchers worldwide access to Earth-monitoring satellite data.
“If you really want to look at fires, or any relationship between fires and long-term climate-change processes, you need to be able to consistently monitor the long term,” says Csiszar. And that means getting as much data from as many satellites as possible.
There have been only a few experimental satellites dedicated exclusively to the detection of wildfire, and their temporal and spatial coverage has been limited. Several key environmental satellites, however, are capable of providing data for global long-term wildfire monitoring.
The long-term satellite fire record begins in the 1980s, with the 1.1-kilometer-resolution Advanced Very High Resolution Radiometer (AVHRR) aboard NOAA’s polar-orbiting satellites. Because AVHRR’s thermal channels saturate at relatively low temperatures, the instrument doesn’t detect fires very well. So prior to the launch of NASA’s Terra and Aqua satellites (in 1999 and 2002, respectively), the record is not as clear as Csiszar would like. Terra and Aqua are much better suited to fire detection because each carries a Moderate Resolution Imaging Spectroradiometer (MODIS). Able to see in 36 spectral bands, with resolutions of 250 meters, 500 meters, or 1 km depending on the band, MODIS provides a fairly accurate picture of wildfire activity on Earth’s surface. The MODIS sensors on Terra and Aqua capture images of the entire planet as the satellites circle Earth on their polar orbits. Together they can observe the same spot four times a day, and their data are made globally available within 2 hours of acquisition.
Launched in 2011, the Visible Infrared Imaging Radiometer Suite (VIIRS), aboard the Suomi National Polar-orbiting Partnership satellite, is even better than MODIS for fire detection, with its imaging and mapping capabilities and its 22 bands of either 750- or 375-meter resolution. The next-generation JPSS-1 satellite, expected to launch in late 2016, will also carry VIIRS. Polar orbiters aren’t the only satellites involved. A network of geostationary satellites coordinated by GOFC/GOLD-Fire is well suited for collecting images at low latitudes and for tracking the daily cycle of fire activity, albeit at a coarser spatial resolution. Fresh data from these satellites are typically available every 30 minutes.
Once computers on the ground have the image data, they can find fires automatically by processing digital-pixel data using detection algorithms. Fire pixels stand out because they produce an increased level of radiance at wavelengths that are characteristic of fire.
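To give a flavor of how such detection algorithms work, here is a minimal sketch of a contextual fire-detection pass in Python. It is loosely inspired by the threshold tests applied to MODIS- and VIIRS-class thermal data, but the window size and threshold values are illustrative assumptions, not the operational ones:

```python
# Sketch of a contextual fire-detection pass over a grid of 4-micrometer
# brightness temperatures (in kelvin). A pixel is flagged as fire if it
# is unambiguously hot, or if it stands out from its local background.
# Thresholds here are illustrative, not operational values.

def detect_fire_pixels(bt, abs_thresh=360.0, delta_thresh=10.0):
    """Return (row, col) of pixels that pass either fire test."""
    rows, cols = len(bt), len(bt[0])
    fires = []
    for r in range(rows):
        for c in range(cols):
            if bt[r][c] >= abs_thresh:          # absolute test: clearly hot
                fires.append((r, c))
                continue
            # Contextual test: compare against the mean of the 3x3 neighbors.
            neigh = [bt[rr][cc]
                     for rr in range(max(0, r - 1), min(rows, r + 2))
                     for cc in range(max(0, c - 1), min(cols, c + 2))
                     if (rr, cc) != (r, c)]
            background = sum(neigh) / len(neigh)
            if bt[r][c] - background >= delta_thresh:
                fires.append((r, c))
    return fires

# A synthetic 4x4 scene: ~300 K land surface with one anomalously hot pixel.
scene = [[300.0] * 4 for _ in range(4)]
scene[1][2] = 340.0   # 40 K above its surroundings
print(detect_fire_pixels(scene))  # -> [(1, 2)]
```

Operational algorithms add many more tests, for cloud masking, sun glint, and false alarms over hot bare ground, but the core idea is the same: a fire pixel is one whose radiance is anomalous both in absolute terms and relative to its neighborhood.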
Black Forest burn scar: This false-color image is from NASA’s Terra satellite. The darkest gray and black patches reveal the most severely burned areas. [Image: NASA Earth Observatory]
When it comes to satellite monitoring, it’s important to get as much high-spatial-resolution data as possible at a high temporal frequency. This can be a problem, because how often a single polar-orbiting satellite can image a given spot depends on its orbit and the width of the sensor’s swath. The narrower the swath, the less frequently you’re going to get image updates of the same patch of the planet, making it harder to monitor the entire Earth’s landmass.
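The swath-versus-revisit trade-off can be roughed out with a back-of-the-envelope calculation. The sketch below assumes round numbers (a ~14-orbit day typical of a sun-synchronous satellite, and the equator as the hardest latitude to cover); it is an approximation, not mission-planning math:

```python
# Back-of-envelope estimate of how long a polar orbiter needs to image
# the whole equator, given its swath width. Wider swaths tile the globe
# in fewer days. All figures are rough, illustrative approximations.

EQUATOR_KM = 40_075       # Earth's equatorial circumference
ORBITS_PER_DAY = 14.2     # typical for a ~700-800 km sun-synchronous orbit

def days_to_full_coverage(swath_km):
    """Days for adjacent ground tracks to cover the entire equator."""
    return EQUATOR_KM / (swath_km * ORBITS_PER_DAY)

# A MODIS-class wide swath (~2,330 km) vs. a Landsat-class narrow one (~185 km)
print(f"wide swath:   {days_to_full_coverage(2330):.1f} days")
print(f"narrow swath: {days_to_full_coverage(185):.1f} days")
```

The wide swath yields global coverage in roughly a day, while the narrow swath needs about 15 days, which is in the right neighborhood of Landsat’s actual 16-day repeat cycle.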
This is where international collaboration and the use of constellations of satellites come into play. If one national space agency’s satellite takes images less frequently than that of another agency, researchers may want data from both to get a complete picture.
It’s for this reason that Chris Justice—both a member of the team that developed the fire-detection capabilities for MODIS and VIIRS and a cochair of GOFC/GOLD-Fire—advocates strongly for national space agencies to offer their data at no cost. “With multiple systems in place, you can increase the temporal frequency of coverage,” Justice says. Ideally, burned areas should be monitored at a moderate resolution—about 30 meters—every two to three days, he adds. The spatial resolutions of VIIRS and MODIS are fine for fire detection, but when mapping areas that have already burned, scientists really need that 30-meter spatial resolution at high temporal frequencies.
But not all space agencies are willing to make their data free to the public, as NASA does. Justice would like the Indian Space Research Organisation and the Centre National d’Etudes Spatiales of France to make their moderate-resolution satellite data free as well. He claims that the cost of their data is an obstacle to their broader use. Neither of these agencies would comment for this article.
Charging for data hinders global fire monitoring, says Justice, citing the lesson NASA learned about charging for Landsat data as an example. Until 2008, Landsat prices ranged from US $200 to about $4,000 per 184- by 184-km scene, and the most scenes the agency ever sold in a year was 25,000, according to the Earth Resources Observation and Science (EROS) center. Since 2008, when the images first became free, more than 13 million Landsat scenes have been distributed to 186 countries worldwide.
But not everyone agrees with Justice. “As a science and engineering community, we should want to have the best data provided for the work we do,” says Jim Lynch, forestry director of DMCii, a company that provides satellite data to customers around the world. The argument for charging for data is that without the ability to recoup the cost of building, launching, and operating imaging satellites, fewer of them will be put into orbit, which limits the available data. “In reality, satellite imagery is not expensive. In my opinion, if it is being used for research purposes it will only be a small proportion of the total cost of a study, and therefore it’s important that the best available data is used,” says Lynch. A growing number of countries, such as Australia and the Netherlands, purchase DMC data to feed into their national data portals and make it available to users free of charge. “It is significantly more cost effective and flexible than running a $1 billion national satellite program,” he says.
But Thomas Holm, chief of the EROS policy and communications office, insists we’re better off now that Landsat data are free. “People are using the data that they need, not what they can afford,” he says. “Because it’s of no cost to users, they’re able to look at long records. You’ve got a 40-year record of the face of the Earth, and you can begin to look at the changes that are occurring.”