Keywords: instance, motion
Summary
This scenario includes a sensor configuration to approximate the response and commonly observed artifacts of a microbolometer camera system.
Details
The focus of this demo is the use of the built-in detector model in the BasicPlatform sensor plugin. Specifically, it models a camera with a broad LWIR spectral response, a long decay (response) time, dead and hot pixels, and fixed pattern noise.
Important Files
The Scene
The scene is a variant of the scene used in the NestedMotion1 demo. That scene featured a set of 4 spheres orbiting a central cube, with the 5-object system tracing a circular path about the origin of the scene. The modifications applied to the original scene were to assign warm temperatures to the objects and to lengthen the moment arms of the spheres.
The Platform File
The demo.platform file contains the configuration for the camera system.
The parameters can be explored and configured in the graphical Platform
Editor, or directly in the .platform file.
Decay time
An alternative to the traditional integration time, which features a uniform temporal response over the integration period, is the decay time. The decay is modeled as an exponential with a time constant that frequently spans a period longer than the interval between readouts of the array.
<temporalintegration tdi="false">
  <decay>0.02</decay>
  <samples>10</samples>
</temporalintegration>
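As a sketch of what this decay response implies (this is an illustration of the concept, not the DIRSIG implementation; the numbers mirror the 0.02 second decay and 10 samples configured above), the detector output can be thought of as an exponentially weighted average of scene samples, with older samples contributing less:

```python
import math

def decay_weights(sample_ages, tau):
    """Normalized exponential weights for samples at the given ages (seconds)."""
    raw = [math.exp(-t / tau) for t in sample_ages]
    total = sum(raw)
    return [w / total for w in raw]

tau = 0.02  # decay constant, matching <decay> above (seconds)
# 10 sample ages, mirroring <samples>, spread over one readout period
ages = [i * 0.002 for i in range(10)]
weights = decay_weights(ages, tau)

# The most recent sample (age 0) carries the largest weight
assert weights[0] == max(weights)
print(round(sum(weights), 6))  # 1.0 (the weights are normalized)
```

In contrast, a traditional integration would assign all 10 samples equal weight; the exponential weighting is what biases the output toward the most recent scene state.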
Detector model
The detector modeling setup is the primary focus of this simulation. The detector has been configured with characteristics and artifacts commonly found in an LWIR camera featuring a microbolometer array. These include a non-zero number of dead pixels, which have no response, and so-called "hot" pixels, which are always at the maximum value; the detector model currently supports these as probabilities. The read and dark current noise can be adapted for these types of arrays as well. Fixed pattern noise is another common artifact in these arrays; it is modeled as a random process that is either row or column aligned (the fixed, additive bias is constant along the aligned axis) and is defined by minimum and maximum values in output digital counts.
<detectormodel>
  <quantumefficiency>0.8</quantumefficiency>
  <darkcurrentdensity>0</darkcurrentdensity>
  <readnoise>100</readnoise>
  <deadpixelprobability>0.0001</deadpixelprobability>
  <hotpixelprobability>0.0001</hotpixelprobability>
  <minelectrons>4.5e+10</minelectrons>
  <maxelectrons>1.5e+11</maxelectrons>
  <bitdepth>12</bitdepth>
  <fixedpatternnoise axis="1">
    <minimumcount>48</minimumcount>
    <maximumcount>128</maximumcount>
  </fixedpatternnoise>
</detectormodel>
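To make the artifact descriptions concrete, here is a minimal sketch (our illustration, not the DIRSIG code) of how these artifacts could be applied to an ideal digital image. The values mirror the configuration above: 12-bit output, dead and hot pixel probabilities of 1e-4, and column-aligned fixed pattern noise biases drawn between 48 and 128 counts:

```python
import random

def apply_artifacts(image, rng, bit_depth=12,
                    dead_prob=1e-4, hot_prob=1e-4,
                    fpn_min=48, fpn_max=128):
    """Apply dead/hot pixels and column-aligned FPN to a 2-D image (list of rows)."""
    rows, cols = len(image), len(image[0])
    max_count = 2 ** bit_depth - 1

    # Column-aligned fixed pattern noise: one additive bias per column,
    # constant down the entire column.
    col_bias = [rng.randint(fpn_min, fpn_max) for _ in range(cols)]

    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            value = min(image[r][c] + col_bias[c], max_count)
            roll = rng.random()
            if roll < dead_prob:
                value = 0            # dead pixel: no response
            elif roll < dead_prob + hot_prob:
                value = max_count    # hot pixel: pinned at the maximum
            row.append(value)
        out.append(row)
    return out

rng = random.Random(42)
flat = [[1000] * 64 for _ in range(64)]  # uniform 64x64 test image
noisy = apply_artifacts(flat, rng)
```

Because the bias is drawn once per column, the fixed pattern noise appears as a static vertical striping that does not change from frame to frame, which is what makes it visually distinct from the read noise once the platform jitters.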
Platform Jitter
The demo.ppd file includes jitter so that the pointing of the camera
moves as a function of time. As a result, you can see how the fixed
pattern noise remains constant in the focal plane while the imaged
scene shifts beneath it. The correlated jitter (vibrations at specific
frequencies) also introduces patterns in the artifacts left by the
swirling spheres as a function of time. As a reminder, the integration
of the detectors is performed by sampling the scene as a function of
time; hence, the resulting blur or streaks reflect the motion within
the scene and the motion of the platform during the integration.
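The temporal-sampling mechanism described above can be sketched with a toy 1-D example (ours, not DIRSIG internals): a point source moving across an array during the integration deposits signal into every pixel it crosses, producing a streak.

```python
def integrate_moving_source(num_pixels, start, velocity, num_samples, t_int):
    """Average instantaneous frames of a moving point source over an integration."""
    signal = [0.0] * num_pixels
    for i in range(num_samples):
        t = t_int * i / num_samples          # sample time within the integration
        pos = int(start + velocity * t)      # source position at that instant
        if 0 <= pos < num_pixels:
            signal[pos] += 1.0 / num_samples  # equal temporal weighting
    return signal

# A source crossing 10 pixels during a 10 ms integration leaves a streak
streak = integrate_moving_source(num_pixels=32, start=4,
                                 velocity=1000.0,  # pixels per second
                                 num_samples=100, t_int=0.010)
lit = [i for i, s in enumerate(streak) if s > 0]
print(lit)  # pixels 4 through 13 all receive signal
```

With a short integration (or a slow source), `pos` barely changes between samples and the signal stays in one pixel; lengthening the integration spreads the same total signal along the motion path, which is exactly the blur seen in the longer-integration simulations below.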
Simulations and Results
This section includes step-by-step instructions for running and visualizing the simulations. This demo focuses on various effects in the sensor configuration, with an emphasis on the long temporal integration with a decay. To see the impacts of this, we run a series of simulations with different integration setups.
Short Integration Time
The image below comes from a simulation with a short (1 ms) integration
time (see short_int.jsim). On this short time scale, the scene and
the motion of the platform are largely frozen. The image below shows
the basic layout of the scene:
Longer Integration Time
The image below comes from a simulation with a longer (10 ms) integration
time (see long_int.jsim). On this longer time scale, the motion within
the scene and the motion of the platform are significant during the integration. As
a result, the motion of the spheres creates blur and the motion of the
platform (from the jitter) produces blur in the background (arrows and
"N"):
Longer Decay Time
This final simulation switches from a traditional integration response (where the signal from the detector reflects a nearly equal temporal weighting of the signal during the integration) to a decay response common in a microbolometer type array. For this scenario, the array is still read out at a constant rate, but the output is proportional to the signal received over a longer time period, with an exponential weighting. In this case, we are modeling a 20 ms decay time, which means the signal is due to effects over a nearly 100 ms time window, but is exponentially biased toward the end of that window.
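The "nearly 100 ms window" follows from the exponential weighting: a sample five time constants (5 x 20 ms = 100 ms) old carries only exp(-5), under 1%, of the weight of a current sample, so contributions beyond that window are negligible. A quick check:

```python
import math

tau = 0.020  # 20 ms decay constant (seconds)

# Relative weight of a sample that is 100 ms (5 time constants) old
relative_weight_at_100ms = math.exp(-0.100 / tau)
print(f"{relative_weight_at_100ms:.4f}")  # 0.0067, i.e. about 0.7%

# Fraction of the total exponential weight captured within the 100 ms window
fraction_within_window = 1.0 - relative_weight_at_100ms
print(f"{fraction_within_window:.3f}")  # about 99.3%
```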
Since this simulation is in the thermal region (where the radiance magnitudes are lower) and requires more temporal sampling, the default convergence settings are not sufficient. Below is a frame from the simulation using the default convergence:
$ dirsig5 long_decay.jsim
The noise in the tails of the rotating spheres is a result of running out of samples in those pixels before the result could properly sample the temporal domain.
The image below results when running with a smaller radiance threshold (to reflect the lower radiance magnitude in the LWIR) and with a higher max samples (paths/pixel) to better sample the temporal domain.
$ dirsig5 --convergence=50,2500,1e-8 long_decay.jsim
The image features:

- Streaks behind the rotating spheres,
- Wobble in those streaks due to high-frequency (shorter time constant than the effective integration time) jitter in the platform,
- Blur and "ghosting" in the background (arrows and "N") due to that same high-frequency jitter, and
- Fixed pattern noise that is "column aligned", producing a vertical noise pattern.
To run the multi-frame (video) simulation, run the video.jsim setup
that uses a longer task window with the desired convergence parameters:
$ dirsig5 --convergence=50,2500,1e-8 video.jsim
These frames can be converted to PNG files using image_tool and then
encoded into video using ffmpeg:
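A possible command sequence is shown below. Note that the exact image_tool arguments vary by DIRSIG version (consult image_tool --help), and the frame filename patterns here are illustrative, so match them to the actual output filenames:

$ image_tool convert --format=png video-t*.img
$ ffmpeg -framerate 24 -i frame-%04d.png -c:v libx264 -pix_fmt yuv420p video.mp4

The -pix_fmt yuv420p option is included because many video players cannot decode H.264 streams that use other pixel formats.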