Important: This document is currently under construction.

Model Overview

The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model has been actively developed at the Digital Imaging and Remote Sensing (DIRS) Laboratory at Rochester Institute of Technology (RIT) for two decades. The model is designed to generate passive broadband, multi-spectral
[John R. Schott, Scott D. Brown, Rolando V. Raqueño, Harry N. Gross and Garry Robinson, "An advanced synthetic image generation model and its application to multi/hyperspectral algorithm development", Canadian Journal of Remote Sensing, Vol. 25, No. 2, pp. 99-111, June 1999]
, hyper-spectral
[Emmett J. Ientilucci and Scott D. Brown, "Advances in wide area hyperspectral image simulation", Proc. SPIE, Vol. 5075, pp. 110-121, Orlando FL, April 2003]
, low-light
[Emmett J. Ientilucci, "Synthetic Simulation and Modeling of Image Intensified CCDs (IICCD)", RIT Imaging Science M.S. thesis, 1998]
, polarized
[James R. Shell II, "Polarimetric Remote Sensing in the Visible to Near Infrared", RIT Imaging Science Ph.D. thesis, 2005]
, active laser radar
[Scott D. Brown, Daniel D. Blevins, and John R. Schott, "Time-gated topographic LIDAR scene simulation", Proc. SPIE, Vol. 5791, Laser Radar Technology and Applications X, pp. 342-353, April 2005]
, and synthetic aperture radar
[Michael Gartley, Adam Goodenough, Scott Brown and Russel P. Kauffman, "A comparison of spatial sampling techniques enabling first principles modeling of a synthetic aperture RADAR imaging platform", Proc. SPIE, Vol. 7699, Algorithms for Synthetic Aperture Radar Imagery XVII, 76990N, April 2010]
datasets through the integration of a suite of first-principles-based radiation propagation modules. These object-oriented modules address tasks ranging from bi-directional reflectance distribution function (BRDF) predictions of a surface, to time- and material-dependent surface temperature predictions, to the dynamic viewing geometry of scanning imaging instruments on agile ground, airborne and space-based platforms. In addition to the many DIRSIG-specific modules that have been created, there is a suite of interface modules that leverage externally developed atmospheric (e.g. MODTRAN and MonoRTM) and thermodynamic (e.g. MuSES) components that are modeling workhorses for the multi- and hyper-spectral community. The software is employed internally at RIT and externally within the user community as a tool to aid in the evaluation of sensor designs and to produce imagery for algorithm testing purposes. Key components of the model and some aspects of the model’s overall performance have been gauged by several validation efforts over the past decades of the model’s evolution (Mason 1994 and Brown 1996).

Implementation

The DIRSIG software is written entirely in C++, is managed under a revision control system and includes a large suite of unit and integration tests. The model includes a detailed graphical user interface and is available for a wide variety of computing platforms including Windows, Mac OS X and Linux. Updates to the software are released 2-4 times a year to a user base of approximately 385 registered users working for the U.S. Government either directly or at supporting contractors.

The DIRSIG radiometry engine is robust and modular. To produce data sets that contain the spatial and spectral complexity of real-world data, the model must be able to reproduce the large set of radiative mechanisms that combine to produce the spectral signatures collected by real-world imaging instruments. The DIRSIG model attempts to incorporate a wide array of these image-forming processes within one modeling environment. To drive these predictive codes, the model must have access to robust characterizations of the elements to be modeled. For example, input databases describe everything from the chemical composition of the atmosphere as a function of altitude to the spectral covariance of a specific material in the scene. The passive modalities trigger a set of radiometry solvers that can account for multi-bounce (multi-scatter) transfer of energy from the sun, moon and sky. For the active modalities, additional sets of algorithms are employed that utilize optimized multi-bounce/scatter radiometric bookkeeping techniques (for example, photon mapping (Jensen 2001)) for the active source.
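
To make the solver organization concrete, the sketch below shows one way passive and active radiometry solvers might sit behind a common interface. This is a minimal, hypothetical illustration in the spirit of the description above; none of the class or function names reflect DIRSIG's actual C++ API.

```cpp
// Hypothetical sketch of a modular radiometry-solver design; all names
// below are invented for illustration and are not DIRSIG's actual API.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// A spectral radiance result for a single sensor query.
struct SpectralRadiance {
    std::vector<double> wavelengths_um;  // sample wavelengths [micrometers]
    std::vector<double> radiance;        // W / (m^2 sr um) per wavelength
};

// Common interface shared by passive and active solvers.
class RadiometrySolver {
public:
    virtual ~RadiometrySolver() = default;
    virtual std::string name() const = 0;
    // Produce a spectral radiance estimate for one sensor query.
    virtual SpectralRadiance solve() const = 0;
};

// Passive solver: would gather direct solar, lunar, sky and
// multiple-scatter contributions.
class PassiveSolver : public RadiometrySolver {
public:
    std::string name() const override { return "passive multi-bounce"; }
    SpectralRadiance solve() const override {
        return {{0.55}, {42.0}};  // placeholder single-band result
    }
};

// Active solver: would do bookkeeping for a pulsed source, e.g. by
// building and querying a photon map.
class ActiveSolver : public RadiometrySolver {
public:
    std::string name() const override { return "active photon-mapped"; }
    SpectralRadiance solve() const override {
        return {{1.064}, {3.7}};  // placeholder single-band result
    }
};

int main() {
    std::vector<std::unique_ptr<RadiometrySolver>> solvers;
    solvers.push_back(std::make_unique<PassiveSolver>());
    solvers.push_back(std::make_unique<ActiveSolver>());
    for (const auto& s : solvers)
        std::cout << s->name() << " -> L = " << s->solve().radiance[0]
                  << " W/(m^2 sr um)\n";
}
```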

Scene Description

One of the key features of the DIRSIG model is that all modalities are simulated from a common scene description. The synthetic world that is employed is composed of 3-D geometric constructs (polygon geometry, mathematical objects and voxelized geometry) that are assigned a material description (see Figure 1). This material description includes thermodynamic properties to enable temperature prediction and optical properties to drive the radiometric prediction. Users can create 3-D polygon models with a variety of 3-D asset creation tools including AutoCAD, 3ds Max, Rhinoceros, Blender3D and SketchUp. Instead of a discrete suite of modality-specific models, the currently supported modalities all share the same geometric and radiometric core, which ensures phenomenological agreement across modalities. For example, the same specular paint BRDF for a car hood would be used when simulating both a passive RGB camera and an active LIDAR system. Likewise, if a vehicle is painted black, it will be warmer in the thermal infrared than a lighter-colored vehicle because of the difference in solar absorption. The scene database also supports dynamic object positioning, which allows objects like vehicles to be linked to external traffic simulators (for example, the Simulation of Urban MObility (SUMO) model) or collection scenario planning and management tools such as Systems Tool Kit (STK).

Figure 1. DIRSIG employs a single scene database across all modalities.
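
As a minimal sketch of the idea that one material record serves every modality, consider the hypothetical C++ structures below. The field and type names are invented for illustration and do not correspond to DIRSIG's actual material database schema.

```cpp
// Hypothetical material record combining the optical and thermodynamic
// properties described above; names are illustrative only.
#include <iostream>
#include <string>

struct ThermalProperties {
    double solar_absorptance;    // [0..1]; drives daytime solar heating
    double emissivity;           // [0..1]; drives thermal emission
    double thermal_mass_kJ_m2K;  // areal heat capacity
};

struct OpticalProperties {
    std::string brdf_model;  // e.g., a measured specular-paint BRDF
    double diffuse_albedo;   // broadband approximation [0..1]
};

// One material record feeds every modality: the same BRDF answers a
// passive RGB query and an active LIDAR return, and the same thermal
// parameters set the surface temperature seen in the infrared.
struct Material {
    std::string name;
    OpticalProperties optical;
    ThermalProperties thermal;
};

int main() {
    Material black_paint{"gloss black automotive paint",
                         {"measured specular BRDF", 0.04},
                         {0.95, 0.90, 18.0}};
    // High solar absorptance -> a black hood runs warmer in the LWIR.
    std::cout << black_paint.name << ": solar absorptance = "
              << black_paint.thermal.solar_absorptance << "\n";
}
```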

Instrument Modeling

The DIRSIG sensor module was designed to provide a framework upon which a myriad of multi-modal sensors can be implemented. Passive imaging systems can range in complexity from single-pixel scanning architectures, to modular pushbroom arrays, to 2-D imaging arrays. The geometry (pixel sizes and locations) of an imaging array is separated from the "capture" of incident energy by that device. This allows a 2-D array to be combined with either an array-wide filter or a color-filter array (for example, a Bayer pattern) to capture multi-spectral imagery. To model a hyper-spectral system, the same 2-D array can be combined with a dispersive or refractive element to create the spatial separation of the incident spectral flux. A LIDAR receiver array analyzes the incident temporal flux to determine when a return is detected. In contrast, a real or synthetic aperture radar system employs an entirely different family of radiation measurement technologies; in these systems, the user describes an antenna array and its radial gain function.
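
The separation between array geometry and spectral "capture" can be sketched as follows. This is a hypothetical illustration of the design idea, assuming invented class names rather than DIRSIG's actual sensor interfaces.

```cpp
// Hypothetical split of focal-plane geometry from per-pixel spectral
// capture; names are illustrative only.
#include <iostream>
#include <memory>
#include <string>

// Geometry: where the pixels are, independent of what they measure.
struct ArrayGeometry {
    int cols, rows;
    double pitch_um;  // pixel pitch in micrometers
};

// Capture: how incident flux is spectrally selected at each pixel.
class SpectralCapture {
public:
    virtual ~SpectralCapture() = default;
    virtual std::string bandFor(int col, int row) const = 0;
};

// One filter covering the whole array (panchromatic or single band).
class ArrayWideFilter : public SpectralCapture {
    std::string band_;
public:
    explicit ArrayWideFilter(std::string band) : band_(std::move(band)) {}
    std::string bandFor(int, int) const override { return band_; }
};

// A Bayer color-filter array: band selection by pixel position.
class BayerCFA : public SpectralCapture {
public:
    std::string bandFor(int col, int row) const override {
        if (row % 2 == 0) return (col % 2 == 0) ? "R" : "G";
        return (col % 2 == 0) ? "G" : "B";
    }
};

int main() {
    ArrayGeometry geom{640, 480, 5.5};
    std::unique_ptr<SpectralCapture> cfa = std::make_unique<BayerCFA>();
    // Same geometry, different capture: swap BayerCFA for ArrayWideFilter
    // to turn the RGB camera into a single-band imager.
    std::cout << geom.cols << "x" << geom.rows << " array, pixel (1,0) sees "
              << cfa->bandFor(1, 0) << "\n";
}
```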

Platform Modeling

The DIRSIG model also features an advanced data acquisition model that centers on the concept of a "platform" that can have one or more imaging or non-imaging sensors attached to it. The platform model allows sensors to be positioned relative to one another and synchronized by central clocking mechanisms. This platform can represent a variety of data acquisition platforms ranging from a fixed tripod, to a driving van, to a flying aircraft, to an orbiting satellite (Brown 2011). The platform model also supports flexible command data interfaces to control the position and orientation of the platform as a function of time (e.g. vehicle route, aircraft flight line, etc.) as well as dynamic platform-relative pointing interfaces (scan patterns, camera ball pointing, etc.). The images in Figure 2 and Figure 3 are simulated data products for the same scene, but were collected by separate airborne LIDAR and color imaging systems. In addition to employing different sensor modalities, these two sensors imaged the scene from different platforms at different times of day.

Figure 2. A Level-2 (coincidence processed) point cloud product derived from a DIRSIG Level-1 (raw point cloud) simulation of an airborne GmAPD Lidar system over a residential site (MegaScene1).
Figure 3. A passive RGB image of the same residential site.

Technical Readiness Levels

The DIRSIG model supports a variety of different imaging modalities and application areas. These different focus areas have been independently developed over many years and with different levels of focus or resources. To help the end user gauge the technical readiness of the model for a given application, we have developed a qualitative scale to rank different aspects of the model. These levels track the life cycle of a new modality or research application from conception to maturity. The largest driver of how quickly something progresses through the life cycle is community interest and resources. Some new model features are conceived and incubated during a period of intense interest by the community, which then wavers. In those cases, a topic might be abandoned until interest and funding resources are renewed.

Mature

This level represents the highest level of readiness on the scale. That does not mean that there is no room for improvement, but it does indicate that the application area has been well exercised by the team at RIT and the user community as a whole. Documentation and examples for this level are usually abundant.

Operational

This level corresponds to features that are becoming finalized. Documentation and demonstrations (examples) at this level are well established but (perhaps) not complete. End users should utilize features at this level with caution and expect minor bugs and well-documented limitations.

Experimental

This level represents a capability that has achieved the most basic form of usability. Documentation for capabilities at this level is usually non-existent or ad hoc. Interfaces to the model may change as the capability is refined and matured. End users should utilize features at this level with extreme caution and expect bugs and undocumented limitations.

Exploratory

This level is associated with new research areas that are being explored by the team at RIT, but which are not currently available to the end user in any form. Some research in these areas may never make it out of this state.