Hyperspectral imaging (HSI) involves collecting and processing information to generate a spatial map of spectral variation. It records tens or hundreds of images at closely spaced wavelengths, or spectral bands, rather than just the red, green, and blue of standard color imaging. HSI plays an important role in remote sensing tasks such as land-cover classification and target and anomaly detection. However, traditional hyperspectral cameras, or sensors, are expensive, data-heavy, and slow. They build what are known as 3D hyperspectral data cubes out of information collected as a set of 2D images that are often very similar to each other. The time required to collect this data makes the process costly. It also leads to image distortion caused by motion of the platform, such as a satellite or airplane, or of in-scene objects, for example, moving vehicles.
Recent advances in sensor design inspired by compressive sensing (CS) provide new ways of capturing hyperspectral images1 using fewer snapshots and measurements than traditional sensors.2, 3 Built on ideas from CS and the more general field of computational imaging, a new device called the Coded Aperture Snapshot Spectral Imager-Dual Disperser (CASSI-DD) captures an approximation of the full hyperspectral scene with a single snapshot.4 It disperses incoming light with a prism, blocks 50% of the light with a coded aperture, and realigns the spectrum with a second prism. A photodetector then measures the resulting multiplexed spatial and spectral information of the entire scene. An iterative algorithm inverts the sensing model to produce an approximate reconstruction of the hyperspectral cube from the CS measurements. Taking additional measurements with different coded apertures improves the approximation of the cube.
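The dual-disperser measurement described above can be sketched in NumPy under a deliberately simplified model: each spectral band of each pixel is modulated by a column-shifted copy of the binary coded aperture, and the detector integrates over wavelength. The wrap-around shift, array sizes, and random cube are illustrative assumptions, not the actual CASSI-DD optics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hyperspectral cube: rows x cols x spectral bands (values are arbitrary).
rows, cols, bands = 32, 32, 16
cube = rng.random((rows, cols, bands))

def cassi_dd_snapshot(cube, code):
    """Simplified dual-disperser measurement: the first prism shears the
    spectrum across the coded aperture, the second prism undoes the shear,
    and the detector integrates over wavelength.  The net effect modeled
    here is that band k of pixel (i, j) is modulated by the code entry
    shifted by k columns."""
    rows, cols, bands = cube.shape
    y = np.zeros((rows, cols))
    for k in range(bands):
        # Shift the binary code by k columns (wrap-around for simplicity).
        shifted = np.roll(code, k, axis=1)
        y += shifted * cube[:, :, k]
    return y

# 50% open binary coded aperture, as in the article.
code = (rng.random((rows, cols)) < 0.5).astype(float)
snapshot = cassi_dd_snapshot(cube, code)
print(snapshot.shape)  # one 2D measurement per snapshot
```

Each additional snapshot would reuse this forward model with a different coded aperture, which is what gives the multi-snapshot refinement described in the text.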
This style of snapshot imaging offers significant advantages for remote sensing, including faster sensing and more economical and efficient use of information. It also allows a more flexible sensing method, since it produces coarse approximations of the entire scene with the first few snapshots, which can be used to determine whether further snapshots are needed for additional refinement. However, CASSI-DD, and CS sensors in general, have so far only been demonstrated on close-up scenes with large-scale features. Remotely sensed images are typically far more complex, and these new devices may not generate the highly redundant, or compressible, images needed for successful CS. This article summarizes our recent research on how effectively the CASSI-DD collects data remotely.
To investigate the accuracy of snapshot imaging in remotely sensed HSI, we applied the CASSI-DD sensing model to hyperspectral cubes collected with traditional sensors. This allowed us to demonstrate how simulated CS images change with the number of measurements taken, and to characterize the resulting errors and their effects on individual spectra. Constructing a 3D hyperspectral cube from the 2D CASSI-DD snapshots requires solving a system of linear equations that has infinitely many solutions. We used an algorithm that picks the solution minimizing a function that measures the continuity of the spectrum and the length of the edges of shapes in the hyperspectral image.5 A downside of this algorithm is that it can blur small-scale spatial and spectral features.
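The idea of selecting one solution of an underdetermined system by minimizing a regularity functional can be illustrated with a much simpler surrogate than the article's algorithm: a quadratic spectral-smoothness penalty solved in closed form. The sensing matrix, signal, and penalty weight below are all toy assumptions, not the CASSI-DD model or the functional of reference 5.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy underdetermined system: m measurements of an n-sample spectrum, m < n.
n, m = 20, 8
A = rng.standard_normal((m, n))        # stand-in for a sensing matrix
x_true = np.sin(np.linspace(0, 3, n))  # smooth "spectrum"
y = A @ x_true

# First-difference operator penalising spectral discontinuity (a stand-in
# for the smoothness + edge-length functional used in the article).
D = (np.eye(n, k=1) - np.eye(n))[:-1]
lam = 0.1

# Minimise ||A x - y||^2 + lam * ||D x||^2 via the normal equations.
x_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative error: {rel_err:.3f}")
```

Because the penalty rewards smoothness, this surrogate shares the drawback noted above: sharp small-scale features in the true signal tend to be blurred in the solution.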
We applied the CASSI-DD sensing model to three publicly available hyperspectral cubes: AVIRIS Cuprite,6 HYDICE Urban,7 and HyMap CookeCity. We measured how the number of snapshots taken, nt, affects reconstruction accuracy, target detection, and classification. Figure 1 shows reconstruction error as a function of the information content ratio r, the ratio of the number of snapshots to the number of image bands.
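The two error measures plotted in Figure 1, relative Euclidean (L2) error and relative maximum (L∞) error, can be computed as follows. The values are toy numbers, and the L∞ normalization shown is one common convention; the article's exact definition may differ.

```python
import numpy as np

# Toy original and reconstructed signals.
original = np.array([1.0, 2.0, 3.0, 4.0])
reconstruction = np.array([1.1, 1.9, 3.2, 3.8])

# L2: relative Euclidean error.
l2_err = np.linalg.norm(reconstruction - original) / np.linalg.norm(original)

# L-infinity: relative maximum error (one common normalization).
linf_err = np.max(np.abs(reconstruction - original)) / np.max(np.abs(original))

print(f"L2 = {l2_err:.4f}, Linf = {linf_err:.4f}")
```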
Individual reconstructed bands, or images taken within a narrow range of wavelengths, are almost visually indistinguishable from the original bands, while spectral signatures indicate each model's performance. Figure 2(a) and (b) shows representative spectra from the AVIRIS Cuprite and HYDICE Urban scenes. Note that CASSI-DD preserves the general shape of the spectra even when collecting only a fraction of the measurements required by traditional sensors.
Principal component analysis reveals that CASSI-DD images alter the position of the data cloud when r is small: see Figure 4(a) and (b). This alteration can move target-like pixels farther from the target in the data cloud. As a result, subpixel target detection suffers unless the CS image is derived from a sufficiently large fraction of the physical measurements taken with a traditional sensor.8 These results depend on the reconstruction algorithm we used, and future reconstruction methods tailored to specific tasks might improve them.9
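The two PCA diagnostics of Figure 4, the number of eigenvectors needed to capture 99% of the variance and the spectral angle between first eigenvectors of the input and the reconstruction, can be sketched as follows. The synthetic 500-pixel, 10-band scene and the noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def first_eigvec_and_dim99(pixels):
    """PCA of a (num_pixels x num_bands) matrix: return the leading
    eigenvector of the covariance and the number of components needed
    to capture 99% of the variance (Figure 4(a))."""
    centered = pixels - pixels.mean(axis=0)
    cov = centered.T @ centered / (len(pixels) - 1)
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]    # reorder to descending
    frac = np.cumsum(vals) / np.sum(vals)
    return vecs[:, 0], int(np.searchsorted(frac, 0.99) + 1)

def spectral_angle(u, v):
    """Angle (radians) between two spectra; abs() removes the sign
    ambiguity of eigenvectors (Figure 4(b))."""
    c = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

# Toy "scene": one dominant spectral direction plus noise; the
# "reconstruction" is a slightly noisier copy of the scene.
base = np.linspace(1.0, 2.0, 10)
scene = rng.random((500, 1)) * base + 0.05 * rng.standard_normal((500, 10))
recon = scene + 0.05 * rng.standard_normal(scene.shape)

e_in, k_in = first_eigvec_and_dim99(scene)
e_re, k_re = first_eigvec_and_dim99(recon)
print(k_in, k_re, spectral_angle(e_in, e_re))
```

A larger spectral angle between the two first eigenvectors indicates the kind of data-cloud shift described above, which is what degrades subpixel target detection.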
It is interesting to note that there is no significant loss of accuracy for land classification and normalized difference vegetation index (NDVI) maps. Figure 3 shows that binary vegetation maps created from NDVI are sufficiently accurate even when only a low percentage of the physical data is measured.
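For reference, NDVI is computed per pixel from a red band and a near-infrared (NIR) band, and a binary vegetation map follows by thresholding. The reflectance values and the 0.3 cutoff below are illustrative; the article does not state which threshold was used.

```python
import numpy as np

# Toy reflectance values for a 2x2 patch: red band and near-infrared band.
red = np.array([[0.10, 0.40],
                [0.08, 0.35]])
nir = np.array([[0.50, 0.45],
                [0.60, 0.30]])

# NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; vegetation reflects
# strongly in the NIR and absorbs in the red, giving high NDVI.
ndvi = (nir - red) / (nir + red)

# Binary vegetation map by thresholding (0.3 is an illustrative cutoff).
vegetation = ndvi > 0.3
print(ndvi.round(2))
print(vegetation)
```

Because NDVI depends only on the broad shape of each spectrum in two bands, it is plausible that it tolerates the spectral blurring of CS reconstructions better than subpixel target detection does, consistent with the results above.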
Sensors such as CASSI-DD are potentially robust, cheap hyperspectral cameras. They capture low-resolution approximations of hyperspectral cubes with a single snapshot and can adjust the amount of data collected based on the results of that first snapshot. This makes the sensors ideal for rapid, area-wide imaging: they quickly pass over areas where a single snapshot suffices and linger to collect more snapshots in areas requiring more detail.
Our future work will focus on designing a new class of reconstruction algorithms that use learned local spatial and spectral dictionaries, or collections of signal components. Such algorithms might increase the quality of hyperspectral reconstructions at the cost of requiring additional prior information.
John B. Greer
National Geospatial-Intelligence Agency
John Greer received his PhD in mathematics from Duke University.
Justin Christopher Flake
Justin Flake received his PhD in mathematics from the University of Maryland.
Maria Busuioceanu has an MS in imaging science.
David W. Messinger
Rochester Institute of Technology
David Messinger received his PhD in physics from Rensselaer Polytechnic Institute.
1. D. Brady, Optical Imaging and Spectroscopy, Wiley-OSA, New York, 2009.
2. D. L. Donoho, Compressed sensing, IEEE Trans. Inform. Theory 52(4), p. 1289-1306, 2006.
3. E. J. Candes, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory 52(2), p. 489-509, 2006.
4. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, T. J. Schulz, Single-shot compressive spectral imaging with a dual-disperser architecture, Opt. Express 15(21), p. 14013-14025, 2007.
5. J. B. Greer, J. C. Flake, Accurate reconstruction of hyperspectral images from compressive sensing measurements, Proc. SPIE 8717, p. 87170E, 2013. doi:10.1117/12.2015148
6. http://aviris.jpl.nasa.gov Website for the Jet Propulsion Laboratory. Accessed 24 July 2013.
7. http://tec.army.mil/Hypercube/ Website for the Army Geospatial Center, US Army Corps of Engineers. Accessed 24 July 2013.

Figure 1. Reconstruction error as a function of the information content ratio. L2: relative Euclidean error (black axis). L∞: relative maximum error (blue axis). r: ratio of the number of snapshots to the number of image bands. DD: dual disperser.
Figure 2. (a) Reconstruction of the AVIRIS Cuprite spectra. (b) Reconstruction of the HYDICE Urban spectra.
Figure 4. Principal component analysis results showing (a) the number of eigenvectors needed to represent 99% of the variance in the data and (b) the spectral angle between the input and reconstructed first eigenvector.
8. M. Busuioceanu, D. W. Messinger, J. B. Greer, J. C. Flake, Evaluation of the CASSI-DD hyperspectral compressive sensing imaging system, Proc. SPIE 8743, p. 87431V, 2013. doi:10.1117/12.2015445
9. J. M. Duarte-Carvajalino, G. Sapiro, Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization, IEEE Trans. Image Process. 18(7), p. 1395-1408, 2009.

Figure 3. Histogram of a normalized difference vegetation index (NDVI) map calculated from the reconstruction of the input hyperspectral cube. nt: number of snapshots.