The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a physics-driven data generation package. It is used to simulate electro-optical and infrared (EO/IR) data (primarily images) collected by ground, airborne and space-based imaging systems.
The primary goal of the model is to support science and engineering trade studies for remote sensing systems.
DIRSIG models how user-defined virtual sensors collect data in a user-created 3D world. That world is defined by 3D geometry using common facet and volumetric representations. The materials applied to the scene geometry are spectral with coverage across the entire EO/IR spectrum.
The model uses an arbitrary bounce, path tracing numerical radiometry approach for light propagation.
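The "arbitrary bounce" idea can be illustrated with a toy Monte Carlo estimator. The sketch below is not DIRSIG's radiometry core; it is a generic one-dimensional example showing how Russian roulette lets a random walk account for an unbounded number of bounces without truncating (and thereby biasing) the multiple-scattering series. For a surface of albedo a, the total light reflected over all bounces is the series a + a² + ... = a / (1 − a), which the estimator reproduces in expectation.

```python
import random

def total_bounce_reflectance(albedo, num_paths=50_000, seed=1):
    """Monte Carlo estimate of light reflected over an unbounded
    ("arbitrary") number of bounces off a surface with the given
    albedo.  Toy illustration only -- not DIRSIG's solver."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_paths):
        throughput = 1.0
        while True:
            throughput *= albedo             # energy surviving this bounce
            total += throughput              # tally this bounce's contribution
            survive = min(1.0, throughput)   # Russian roulette survival prob.
            if rng.random() >= survive:
                break                        # terminate path, unbiased
            throughput /= survive            # re-weight surviving path
    return total / num_paths

# For albedo 0.6 the analytic series sums to 0.6 / 0.4 = 1.5;
# the estimate converges to that value as num_paths grows.
estimate = total_bounce_reflectance(0.6)
```

A fixed-bounce tracer that stopped after, say, three bounces would systematically underestimate bright, highly reflective scenes; the roulette termination above keeps the path length unbounded in expectation while ending most paths early.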
The simulated collection systems can produce many types of data products, including (but not limited to) those described in the sections below.
The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model has been actively developed at the Digital Imaging and Remote Sensing (DIRS) Laboratory at Rochester Institute of Technology (RIT) for three decades.
The software has been freely available to the user community since 1999; access requires that the user has attended a DIRSIG Basic Training session.
The RAdiation transfer Model Intercomparison (RAMI) phases have all focused on benchmarking "models designed to simulate the transfer of radiation at or near the Earth's terrestrial surface, i.e., in plant canopies and over soil surfaces."
RAMI simulations typically consist of radiance scans across portions of the hemisphere for "abstract" (e.g., statistically distributed) and "actual" vegetation canopies. All participating models use the same sets of inputs for each defined problem and then submit their results.
Scenes are generally created for specific projects and, hence, their extent and spatial resolution are driven by requirements for those projects.
The "Alpine Scene" project is an internal project to explore and optimize methods for building large area scenes that will eventually span hundreds of kilometers. This prototype scene is not a real-world location, but is inspired by Mt. Hood. The initial iteration of the scene is 40 km × 40 km and contains 10 million conifer trees.
DIRSIG leverages the physics-driven MODTRAN™ model developed by Spectral Sciences, Inc. (SSI) for atmospheric radiative transfer (direct solar and diffuse sky illumination, path scattering, path emission, path transmission, etc.). DIRSIG pre-builds databases, unique to each simulation, that incorporate geolocation, day of year, time of day, and the MODTRAN description of the atmosphere (aerosols, visibility, etc.).
DIRSIG supports refraction along paths in the atmosphere and can be directly coupled to the temperature, pressure and water vapor profiles utilized in MODTRAN. Below are simulations of a very long (20 km) slant path view of a 1 m × 1 m USAF bar target. The mean path refraction is a few degrees and the wavelength-dependent refraction (angular dispersion from the mean path) between the RGB channels is around 1 microradian.
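A quick small-angle calculation puts the quoted dispersion in perspective relative to the 1 m target (the path length and dispersion values are taken from the text above; the arithmetic is a back-of-the-envelope check, not a DIRSIG output):

```python
# Convert an angular dispersion between color channels into a
# lateral offset at the target via the small-angle approximation.
path_length_m = 20_000    # 20 km slant path (from the text)
dispersion_rad = 1e-6     # ~1 microradian RGB angular dispersion

offset_m = path_length_m * dispersion_rad  # small angle: s = R * theta
# offset_m is 0.02 m, i.e. ~2 cm of chromatic smear on a 1 m target
```

So even over a 20 km path, the channel-to-channel smear is only a few centimeters, a few percent of the bar target's width.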
DIRSIG has a pair of plugins that leverage the industry standard OpenVDB format for storing volumetric data such as clouds and plumes. The plugins support data-driven motion and temporal evolution of these volumes.
Volumetric optical properties describe the spectral extinction, absorption and/or scattering of the medium.
The same path tracing radiometric solution used for traditional 3D scene geometry is also used for volumes. The paths through these volumes might employ tens of "bounces" (scattering events).
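The relationship between the three coefficients follows the standard Beer-Lambert law. The sketch below assumes a homogeneous medium for simplicity; in a real OpenVDB volume the extinction varies per voxel and the exponent becomes an integral of the extinction along the path. The coefficient values are hypothetical.

```python
import math

def transmission(extinction_per_m, path_m):
    # Beer-Lambert attenuation through a homogeneous volume:
    # T = exp(-sigma_t * L).  For a heterogeneous (voxel) volume the
    # exponent generalizes to the line integral of sigma_t along L.
    return math.exp(-extinction_per_m * path_m)

# Extinction is the sum of the absorption and scattering coefficients.
sigma_a = 0.01                 # absorption coefficient, 1/m (hypothetical)
sigma_s = 0.04                 # scattering coefficient, 1/m (hypothetical)
sigma_t = sigma_a + sigma_s    # total extinction, 1/m

t = transmission(sigma_t, 100.0)  # exp(-5), i.e. less than 1% transmitted
```

The scattered (not absorbed) portion of the attenuated light is what feeds the multi-bounce paths mentioned above.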
The user can operate an array without temporal integration (output is instantaneous radiance) or with temporal integration using either a global or rolling shutter.
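The difference between the two shutter modes comes down to when each detector row integrates. The sketch below is a generic illustration of that timing, not DIRSIG's configuration interface; the parameter names and values are hypothetical.

```python
def row_windows(num_rows, t0, integration_s, row_offset_s, rolling):
    """Integration window (start, end) for each detector row.
    Global shutter: every row shares one window.
    Rolling shutter: each row's window is delayed by a per-row offset."""
    windows = []
    for r in range(num_rows):
        start = t0 + (r * row_offset_s if rolling else 0.0)
        windows.append((start, start + integration_s))
    return windows

# With a global shutter all four rows start together; with a rolling
# shutter the last row starts three row-offsets later, which is what
# produces the characteristic skew on fast-moving targets.
global_w = row_windows(4, 0.0, 1e-3, 1e-5, rolling=False)
rolling_w = row_windows(4, 0.0, 1e-3, 1e-5, rolling=True)
```

Running with no temporal integration at all corresponds to sampling the instantaneous radiance, i.e. an integration window of zero width.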
The model supports bi-directional propagation of the transmitter beam, time of flight tracking along all paths and user-defined receiver detection. The user can use the existing platform model to incorporate various scan patterns.
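The core quantity behind time-of-flight tracking is the conversion from a timestamped return to a range. A minimal sketch (not DIRSIG's receiver model):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def round_trip_range(tof_s):
    # A lidar receiver timestamps the return pulse relative to the
    # outgoing pulse; the one-way range is half the round-trip
    # distance travelled at the speed of light.
    return 0.5 * C * tof_s

# A return arriving about 6.67 microseconds after the outgoing pulse
# corresponds to a target roughly 1 km away.
r = round_trip_range(6.67e-6)
```

Tracking this timing along every path (including multi-bounce paths) is what lets the model populate a full waveform rather than a single range per pulse.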
You can learn more about this modality in the LIDAR Modality Handbook.
ChipMaker was originally designed to support algorithm users rather than sensor engineers. Machine Learning (ML) algorithms have generally been trained with higher-level processed data (L2+) in which many sensor characteristics have been compensated for or corrected in some way. Furthermore, most algorithm users are not aware of engineering-level details of the sensor they are interested in. Hence, the sensor modeled in ChipMaker is simplified and configured with higher-level descriptors.
ChipMaker is an evolving capability and as ML usage and training changes, the tool and workflows will evolve as well.
The DIRSIG4 model supports modeling Synthetic Aperture Radar (SAR) systems and outputs the complex phase history, which must be externally processed (focused) into a traditional SAR image product.
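The reason the phase history encodes scene geometry can be shown with the standard two-way phase relation for a point scatterer; focusing algorithms (e.g. backprojection) invert this relationship across many pulses. This is a schematic of the underlying physics, not DIRSIG's output format.

```python
import math

def two_way_phase(range_m, wavelength_m):
    # Phase contributed by a point scatterer at the given range:
    # the two-way path adds 2R of travel, so the phase advances by
    # 2*pi * (2R / lambda) = 4*pi*R / lambda (sign conventions vary).
    return (4.0 * math.pi * range_m / wavelength_m) % (2.0 * math.pi)

# A scatterer a quarter-wavelength further away shifts the recorded
# phase by pi radians -- sub-wavelength range changes are visible in
# the phase, which is what focusing exploits.
phi = two_way_phase(0.25 * 0.03, 0.03)  # X-band-like 3 cm wavelength
```

Because the raw product is this unfocused complex signal, the user pairs DIRSIG with an external image-formation step to obtain a conventional SAR image.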
The DIRSIG4 model supports modeling ground-to-space and space-to-space collection scenarios in support of Space Domain Awareness (SDA) or Space Situational Awareness (SSA) missions.