This is the listing of all the demonstrations that are distributed with DIRSIG. A DIRSIG "demo" is a simple, stand-alone simulation that is designed to show the user how to use a specific feature in the DIRSIG model. In most cases, the demo scenarios use simple scenes, simple sensors, etc. to convey the point.

How to use a demo

For each demo, there is a link to a local ZIP file (installed with DIRSIG) containing all the files necessary to run the demo. The three easy steps to using a demo are:

  1. Download the demo,

  2. Unpack the demo, and

  3. Run the demo

Download the demo

Download the ZIP file containing the demo by clicking the "Download this demo" link at the bottom of the demo summary. Please note that Internet Explorer users might need to select the right-click "Save Target As …" menu item to download the ZIP file into their account.

Unpack the demo

Unzip the demo files into your account (instructions to unzip the archive file are beyond the scope of this document). The demo will be self-contained in a folder with the same name as the demo, and the demos are set up to run from wherever the files are placed.

Run the demo

There is a README.txt file in each demo that contains a description of the demo. To run the demo, simply load the .sim file in the DIRSIG Simulation Editor and click the Run button.

Static Scene Geometry

Geolocated Insertion via the GLIST file

Scene geometry, property maps (insert points) and the platform location can be specified using geocoordinates. DIRSIG supports geodetic (latitude, longitude and altitude), UTM, and ECEF coordinate systems in the GLIST file (to position geometry instances), the PPD file (to position and orient geometry) and in the scene file (to specify the insert point of draped maps).

thumbnails/GeoLocation1.png
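
If you want to sanity-check the ECEF coordinates that correspond to a geodetic insert point before placing it in a GLIST or PPD file, the standard WGS84 conversion can be computed by hand. The sketch below is plain Python (not a DIRSIG tool), and the example point is arbitrary:

```python
import math

# WGS84 ellipsoid constants
A  = 6378137.0               # semi-major axis [m]
F  = 1.0 / 298.257223563     # flattening
E2 = F * (2.0 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert geodetic (lat, lon, alt) to ECEF (x, y, z) in meters."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z

# Example: a point near Rochester, NY at 100 m altitude
print(geodetic_to_ecef(43.16, -77.61, 100.0))
```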

Moving Scene Geometry

The following demos describe various ways to include moving geometry in a scene description.

Positioning an object with Delta Motion

This demo shows how to put moving geometry into a scene using the Delta motion model. A car is shown driving across a flat ground plate using a .mov file.

thumbnails/DeltaMotion1.gif

Positioning an object with Generic Motion

This demo shows how to put moving geometry into a scene using the Generic motion model. A car is shown driving in a circle on a flat ground plate using a .ppd file to describe the position and orientation vs. time.

thumbnails/GenericMotion1.gif
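
The circular path in this demo is just a set of position and orientation samples versus time. A minimal sketch of how such samples could be generated is shown below; the column layout and angle conventions here are illustrative only, so consult the PPD file documentation for the exact format DIRSIG expects.

```python
import math

RADIUS = 20.0      # circle radius [m] (assumed)
SPEED  = 5.0       # vehicle speed [m/s] (assumed)
DT     = 0.1       # sample spacing [s]

omega = SPEED / RADIUS                  # angular rate [rad/s]
period = 2.0 * math.pi / omega          # time for one full circle

# One sample per line: time, x, y, z, heading (deg, CCW from +x)
for i in range(int(period / DT) + 1):
    t = i * DT
    theta = omega * t
    x = RADIUS * math.cos(theta)
    y = RADIUS * math.sin(theta)
    heading = math.degrees(theta + math.pi / 2.0)  # tangent to the circle
    print(f"{t:.2f} {x:.3f} {y:.3f} 0.000 {heading:.2f}")
```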

Positioning an object with Flexible Motion

This demo shows how to put moving geometry into a scene using the Flexible motion model. A plane is shown flying across the scene with some semi-periodic "roll" (rotation about the along-track or heading axis) that is incorporated using a temporally correlated "jitter" model.

thumbnails/FlexMotion1.gif

Nested/Hierarchical Motion

This scene is used to demonstrate nested or hierarchical motion. In the scene, a 2 x 2 grid of balls (spheres) and a cube rotate about a central point, and that point orbits around a second point.

thumbnails/NestedMotion1.gif

Sub-object Motion #1

This demo shows how a component of an object can be assigned motion. In this case, the missile on a simple mobile missile launcher is commanded to transition from an erect position to a lowered position. In addition to showing how to associate motion with a part, this demo shows how to change the materials assigned to a part and how a part can be disabled.

thumbnails/SubObjectMotion1.gif

Sub-object Motion #2

This demo shows how a component of an object can be assigned motion. In this case, the rotor blades on a helicopter are put into motion using the Flexible motion model. In addition, this demo uses temporal integration on the focal plane (see the Temporal Integration demo for more info) to capture the motion blur on the moving blades.

thumbnails/SubObjectMotion2.gif

Built-in Scene Geometry

Built-in Geometry Objects via the ODB file

This scene demonstrates a set of the built-in geometry objects available in DIRSIG. The scene is constructed using an ODB file.

thumbnails/PrimitiveObjects1.png

Built-in Pile Objects via the ODB file

This scene shows some of the built-in geometry objects that can be used to form various types of "piles".

thumbnails/Pilings1.png

Built-in Geometry Objects via the GLIST file

This scene demonstrates a set of the built-in geometry objects available in DIRSIG. The scene is constructed using a GLIST file, which has an expanded set of built-in objects (compared to the ODB file) and allows the built-in objects to be instanced.

thumbnails/PrimitiveObjects2.png

Advanced Scene Geometry

Basic Vertex Normals

DIRSIG supports vertex normals in OBJ geometry files. In these files, a normal vector is associated with each vertex. DIRSIG spatially interpolates this vertex normal information across a polygon surface at run time.

thumbnails/VertexNormals1.png
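
The interpolation DIRSIG performs is conceptually the standard barycentric blend of the three vertex normals at the intersection point, followed by re-normalization. A minimal sketch (the function name and inputs are illustrative, not DIRSIG API):

```python
import numpy as np

def interpolate_normal(n0, n1, n2, b):
    """Blend vertex normals n0, n1, n2 at barycentric coordinates b = (b0, b1, b2)."""
    n = b[0] * np.asarray(n0) + b[1] * np.asarray(n1) + b[2] * np.asarray(n2)
    return n / np.linalg.norm(n)  # re-normalize the blended vector

# Hit point at the centroid of a triangle whose vertex normals tilt apart
print(interpolate_normal((0, 0.2, 1), (0.2, 0, 1), (-0.2, 0, 1), (1/3, 1/3, 1/3)))
```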

Vertex Normals and Sun Glints

This demo provides examples for modeling sun glints off vehicles when the geometry and radiometry models are properly configured. For the geometry, two versions of the same vehicle are used. One has a single normal vector for each facet, while the other has a normal vector associated with each vertex. In the second case, vertex normal interpolation is employed, which continuously varies the normal across the facet. This minimizes the artifacts of using quantized geometry for a continuous surface and produces more realistic solar glints.

thumbnails/VertexNormals2.gif

Fast Clouds

This demo provides a working example of the fast cloud feature, which allows the user to import a cloud field using the regular grid mechanism, assign volumetric materials to it and use the "fast" cloud radiometry solver. This feature is explicitly called the "fast" cloud model because the radiative transfer within the cloud uses a direct solution for first order scattered solar radiance and approximates multiple scattering contributions. It is intended primarily for producing the effects of large-scale clouds on a scene (particularly shadowing), but not for accurate, detailed radiometry of the clouds themselves.

thumbnails/FastCloud1.png

Voxelized Geometry

This demo provides the user with an example of how to import a volumetric object (for example, a 3D plume, cloud, etc.) into DIRSIG. In this example, a 3D model of a flame is imported from an external model that provides gas density and temperature information.

thumbnails/RegularGrid1a.png thumbnails/RegularGrid1b.png

Using Decal Maps to Drape Geometry

This demo shows how to use a special kind of map to drape one set of geometry onto another. This is very useful for geometry that needs to follow the surface of another object. In this demo, some road geometry is mapped onto an undulating surface.

thumbnails/DecalMap1.png

Using a Polygon to Auto-Populate Objects

The motivation for the polygon based random fill is to allow constrained areas of a scene (such as backyards or rooftops) to be randomly filled with similar items. In contrast to some of the other methods of doing this (such as basing it on a material map or density map), the polygon fill ensures that the same random placement is done individually for each polygonal area, which guarantees more uniform global statistics and speeds up processing of sparsely filled areas.

thumbnails/PolyFill1.png

Using Density Maps to Auto-Populate Objects

The density map based random fill lets you take advantage of the mapping mechanisms to define populated areas whose density varies across the surface of an object. This can be useful in cases where you know roughly where objects should be, but don’t want to define specific locations or have them placed uniformly.

thumbnails/DensityMap1.png

Using Base Geometry and Material Populations #1

This demonstration focuses on mechanisms added to the GLIST file to facilitate rapid scene development based on (1) randomly created populations of base objects and (2) variations in material attribution for those base objects.

thumbnails/GlistPopulations1.png

Using Base Geometry and Material Populations #2

This example shows how a parking lot can be filled with cars using a small set of base geometries (different car models) and materials (different paints) to automatically generate a population of car variants. In addition, this demo also leverages some of the tools to position cars generated from that population within the lot and place shopping carts between them.

thumbnails/Parking1.png

Bundling Materials and Maps with Geometry

This example shows how to create "bundled objects", which are folders that contain geometry and all the associated material properties and maps (material, texture, etc.). This approach allows collections of fully attributed objects to be created that can be injected into scenes without any need to manually merge material properties.

thumbnails/BundledObject1.png

Bundling Sources and Motion with Geometry

In this night scenario, a "suspect" car is stationary in a parking lot and a police car approaches (with headlights and spinning lights) and stops. This demo utilizes the "bundled object" approach, which allows objects to be encapsulated into folders that contain all geometry, material properties and maps (material, texture, etc.).

thumbnails/BundledObject2.gif

Optical Properties

The following demos describe methods to configure various optical properties for materials in scenes.

Advanced BRDF Examples

This demo contains examples for how to configure BRDFs for different types of man-made materials with strong directional reflectance properties.

thumbnails/Brdf1.gif

Data-Driven BRDF Example

This demo contains an example of the data-driven BRDF model at the core of DIRSIG5, which was back-ported to DIRSIG4 for comparison purposes and general DIRSIG4 utility. It stores the BRDF using a spherical quad tree (SQT) data structure, which has some unique and powerful features, including adaptive resolution (level of detail), efficient storage, efficient hemispherical integration and efficient sampling mechanisms.

thumbnails/Brdf2.gif

Extinction Examples

This demo shows how to model a variety of volumes (using the built-in shapes) with unique spectral extinction properties.

thumbnails/ExtinctionProp1.png

Property Maps

UV Mapping with Built-in Geometry

This demonstration shows the UV mapping functionality. Instead of tiling property maps along the horizontal (X-Y) plane, they can be "wrapped" around geometry. The built-in sphere object has a default UV mapping associated with it. The facetized box geometry has a UV coordinate associated with each vertex.

thumbnails/UvMapping1a.png thumbnails/UvMapping1b.png
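
For reference, the conventional longitude/latitude parameterization used for spherical UV mapping looks like the sketch below. DIRSIG's exact seam and pole conventions for the built-in sphere may differ, so treat this only as an illustration of the idea.

```python
import math

def sphere_uv(x, y, z):
    """Conventional spherical UV mapping: longitude -> u, latitude -> v."""
    r = math.sqrt(x * x + y * y + z * z)
    u = 0.5 + math.atan2(y, x) / (2.0 * math.pi)
    v = 0.5 - math.asin(z / r) / math.pi
    return u, v

# The +x axis maps to the center of the texture; the poles map to v = 0 and 1
print(sphere_uv(1.0, 0.0, 0.0))   # -> (0.5, 0.5)
```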

UV Mapping with OBJ Geometry

This demonstration shows a more advanced UV mapping example using an OBJ geometry model and a material map.

thumbnails/UvMapping2.png

Bump Map on a Sphere

This demo shows how to use a bump map to introduce normal fluctuations within a surface.

thumbnails/BumpMap1.png

Bump Map on a Plane

Another demo showing how to use a bump map to introduce normal fluctuations within a surface.

thumbnails/BumpMap2.png

Material Map

This demo provides a basic example of a material map, which is a tool that employs a raster image to map different materials to different locations on an object. This demo specifically highlights the setup of a material map and the impact of the "pure" vs. "mixed" material option.

thumbnails/MaterialMap1a.png thumbnails/MaterialMap1b.png

Material Map with Holes

This demo provides an example of a material map configuration that includes a "null" material, which creates a "hole" in the surface. The demo includes a camouflage net supported by spreader poles over an HMMWV vehicle, which uses a material map to distribute different color fabrics and empty space across the surface of the continuous net mesh.

thumbnails/MaterialMap2.png

Reflectance Map

This demo provides a basic example of a reflectance map, which is a tool that allows the user to use a spectral reflectance cube (perhaps derived from a calibrated sensor) to define the hemispherical reflectance of a material. This is a useful mechanism to incorporate a measured background into a scene.

thumbnails/ReflectanceMap1.png

RGB Reflectance Map

This shows how to use RGB image maps to drive the reflectance on a surface. In this case a sphere object is mapped with an RGB image of the Earth. Note that this technique results in materials that are limited to the visible region, since information about the reflectance outside of this region is unknown.

thumbnails/EarthMap1.png

Mixture Map

This demo utilizes a mixture (or fraction) map to describe the materials on a terrain. The original map was created in Maya and stored as an RGB image, where the three 8-bit channels encoded the relative contribution of three materials at each location. This 24-bit mixture image was converted to a 3 band, floating-point image that DIRSIG expects as input.

thumbnails/MixtureMap1.png
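
The conversion described above is straightforward to script. The sketch below (assuming an 8-bit RGB input read with Pillow and NumPy; the filenames are hypothetical) renormalizes each pixel so the three material fractions sum to one and writes a raw float32 cube. Consult the mixture map documentation for the exact floating-point image container DIRSIG expects.

```python
import numpy as np
from PIL import Image

# Load the 24-bit RGB mixture image painted in Maya (hypothetical filename)
rgb = np.asarray(Image.open("mixture_map.png"), dtype=np.float64)  # H x W x 3

# Renormalize each pixel so the three material fractions sum to 1.0
total = rgb.sum(axis=2, keepdims=True)
total[total == 0] = 1.0                 # avoid divide-by-zero on empty pixels
fractions = rgb / total                 # H x W x 3, floating point

# Save as a flat binary cube: band-interleaved-by-pixel, float32
fractions.astype(np.float32).tofile("mixture_map_float.img")
```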

Temperature Map

This demo shows how to use a raster image to define the temperature of an object. In addition to including a static scenario (temperatures that do not vary with time) this demo includes a dynamic scenario where the temperatures vary with time using a series of raster images.

thumbnails/TemperatureMap1a.png thumbnails/TemperatureMap1b.gif

Radiance Map

This demo shows how to use a spectral radiance image cube to define surface leaving radiances in lieu of a radiometry solver.

Coming Soon

Radiometry

Leaf Stacking Effect

The scene is composed of a background over which a stack of successively smaller "leaf planes" is placed. These "leaf planes" have material optical properties that approximate those of real leaves. The impact of the spectrally varying transmission at different wavelengths is then observed.

thumbnails/LeafStack1.png

Reflected Sky

This demonstration contains a mirrored hemisphere that reflects what a MODTRAN-driven sky irradiance field looks like at 8:10 AM local time. The mirrored hemisphere employs a unique material description to make a surface that is a perfect mirror and can be efficiently modeled by DIRSIG.

thumbnails/Skyview1.png

Reflectance Inversion

This scenario demonstrates how the reflectance of a surface can be inverted from a spectral radiance image and an Atmospheric Database (ADB) file.

thumbnails/ReflectanceInversion1.png

Secondary Sources

Please consult the User-Defined Sources Manual for more information.

A Point Source

This demo shows users the basic configuration of secondary sources. The supplied files place an array of different point sources over a simple background.

thumbnails/Sources1.png

An Indoor Source

This demo models an indoor scene that contains a point source near the ceiling and a simple cut-out "window" (with no glass). The camera is placed inside the box-like hallway. Interior illumination is provided by both a point source and the window. Since this demo is about diffuse illumination, the direct viewing of the bulb is disabled via the options file.

thumbnails/Indoors1.png

An Extended Area Source

This demo shows how to set up an extended area source and discusses mechanisms for turning sources on and off.

thumbnails/Sources2.gif

Instancing Sources

This demo shows how to instance sources with other geometry in a scene. In this case, we have a vehicle with headlights. A single point source is instanced to make the two headlights and bundled with the vehicle geometry. Then the vehicle geometry is instanced in the scene, pointed in two directions and given some dynamic motion.

thumbnails/Sources3.gif

Blinking/Modulating Sources

This demo shows how to set up temporally varying sources. In the scene, a combination of blinking and modulating sources is set up. One is always on, two use the "blinking" parameters (frequency and time offset) and three are assigned power spectral density (PSD) descriptions to define modulation. To represent what might be observed in a large area with a 3-phase power grid, the three modulating sources are out of phase with each other by 120 degrees. The scene is then observed with a 2D array camera employing a 2400 Hz read-out rate so that the modulation of the AC sources can be observed.

thumbnails/Sources4.gif
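
To make the phase relationship concrete, the sketch below evaluates three sinusoidally modulated source power profiles, offset by 120 degrees and sampled at the 2400 Hz read-out rate. The modulation frequency and depth are assumed values for illustration, not necessarily the demo's actual parameters.

```python
import numpy as np

F_MOD    = 120.0    # intensity modulation frequency [Hz] (2x a 60 Hz line; assumed)
F_READ   = 2400.0   # camera read-out rate [Hz]
DEPTH    = 0.3      # modulation depth (assumed)
N_FRAMES = 80

t = np.arange(N_FRAMES) / F_READ
for k, phase_deg in enumerate((0.0, 120.0, 240.0)):
    phase = np.radians(phase_deg)
    power = 1.0 + DEPTH * np.sin(2.0 * np.pi * F_MOD * t + phase)
    print(f"source {k}: first 5 relative power samples -> {power[:5].round(3)}")
```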

Platform Mounts

Please consult the Instrument Mounts guide for more information.

Commanded Mount

This shows how to set up the directly-commanded mount, which allows the user to specify non-periodic, platform-relative pointing angles as a function of time.

thumbnails/CommandedMount1.gif

Lemniscate Scan Mount

This setup demonstrates how to use the "Lemniscate" mount to drive figure-8 style, platform-relative scanning.

thumbnails/LemniscateScanMount1.gif

Line Scan Mount

This setup demonstrates how to use the "line" mount to drive unidirectional, linear velocity, platform-relative scanning in the nominal across-track direction.

thumbnails/LineScanMount1.gif

Tabulated Mount

This setup demonstrates how to use the "tabulated" mount object to drive platform-relative scanning using measured or externally generated pointing data.

thumbnails/TabulatedMount1.gif

Tracking Mount

This demo shows how to use the "tracking" mount to always point at a specific target geometry instance. This mount dynamically accounts for both platform and target motion. It is useful for setting up scenarios such as a UAV with a camera ball that follows a vehicle.

thumbnails/TrackingMount1a.gif thumbnails/TrackingMount1b.gif

Scripted Mount

This demo shows users how to use the "scripted" mount object to drive platform-relative scanning that is driven from a user-supplied script.

thumbnails/ScriptMount1.gif

Whisk Scan Mount

This setup demonstrates how to use the "whisk" mount to drive bidirectional, sinusoidal, platform-relative scanning in the nominal across-track direction.

thumbnails/WhiskScanMount1.gif

Platform Jitter

Temporally Uncorrelated Jitter

We take what would otherwise be a non-moving, down-looking platform and jitter its position as a function of time using a normal distribution.

thumbnails/PlatformJitter1.gif

Temporally Correlated Jitter

We take what would otherwise be a non-moving, down-looking platform and jitter its position as a function of time using a temporally-correlated function.

thumbnails/PlatformJitter2.gif
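
The difference between the two jitter flavors can be illustrated with a first-order Gauss-Markov (AR(1)) process, which is one common way to realize temporally correlated noise; whether DIRSIG uses exactly this form internally is not claimed here, and all parameter values below are assumed.

```python
import numpy as np

rng   = np.random.default_rng(42)
DT    = 1.0 / 60.0    # frame interval [s] (assumed)
SIGMA = 0.5           # jitter standard deviation [m] (assumed)
TAU   = 0.5           # correlation time [s] (assumed)
N     = 300

# Temporally uncorrelated: each frame draws an independent normal offset
uncorrelated = rng.normal(0.0, SIGMA, N)

# Temporally correlated: first-order Gauss-Markov (AR(1)) process with the
# same stationary variance but smooth frame-to-frame evolution
rho = np.exp(-DT / TAU)
correlated = np.empty(N)
correlated[0] = rng.normal(0.0, SIGMA)
for k in range(1, N):
    correlated[k] = rho * correlated[k - 1] + np.sqrt(1.0 - rho**2) * rng.normal(0.0, SIGMA)

print(f"frame-to-frame correlation: "
      f"uncorrelated = {np.corrcoef(uncorrelated[:-1], uncorrelated[1:])[0, 1]:.2f}, "
      f"correlated = {np.corrcoef(correlated[:-1], correlated[1:])[0, 1]:.2f}")
```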

Advanced Platform Concepts

Multiple Camera Payload

This demo features four cameras on a single platform. Each camera has a different attachment affine transform, so they point in different directions with some overlap between all the cameras. DIRSIG automatically includes basic image geolocation information in the output ENVI header file, so the separate images can easily be mosaiced via standard software packages.

thumbnails/MultiCamera1.png

Bayer Pattern Focal Plane

This demo shows how to use the data-driven focal plane feature to model a Bayer pattern focal plane.

thumbnails/BayerPattern1.png

Data-Driven Clocking

This shows how to drive a "clock" object with external (and potentially irregular) trigger times.

thumbnails/ExternalTriggers1a.gif thumbnails/ExternalTriggers1b.gif

Temporal Integration (Scene Motion)

This demo describes how to enable temporal integration of pixels. This allows motion blur from scene object and/or platform motion to be included in the output data product. Although this demo focuses on motion blur of a fast moving object being captured by a 2D framing array camera, this feature can also be used to model many aspects of a time-delayed integration (TDI) focal plane, including the pushbroom architectures used in many remote sensing imaging platforms.

thumbnails/TemporalIntegration1a.png thumbnails/TemporalIntegration1b.png

Temporal Integration (Platform Motion)

This demo describes how to enable temporal integration of pixels. This allows motion blur from scene object and/or platform motion to be included in the output data product. This demo complements the TemporalIntegration1 demo. In that demo, the platform is fixed and the car in the scene is moving. In this demo, the platform is moving and the car in the scene is fixed. In both cases, the car looks blurred due to the relative motion between the car and platform.

thumbnails/TemporalIntegration2a.png thumbnails/TemporalIntegration2b.png

Rolling Shutter with Integration

This demo shows how a focal plane can be configured with a rolling shutter, where each line (row) of a 2D array is integrated and read out sequentially. In contrast to a global shutter (where all rows are read out at once), the rolling readout produces artifacts in the image when motion is present in the scene at frequencies proportional to the rolling shutter line rate. In this case, we are looking at a spinning rotor on a helicopter.

thumbnails/RollingShutter1.gif
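
The bent-blade artifact arises because each row is exposed at a slightly later time than the previous one. The sketch below, with assumed line rate and rotor speed (not the demo's actual values), computes how far the blade rotates between the first and last rows of a single frame.

```python
import numpy as np

LINE_RATE = 50.0e3     # row read-out rate [rows/s] (assumed)
ROTOR_RPM = 400.0      # rotor speed [rev/min] (assumed)
N_ROWS    = 512

omega = ROTOR_RPM / 60.0 * 2.0 * np.pi   # rotor angular rate [rad/s]

# Each row of the frame is exposed at a slightly later time, so each row
# sees the blade at a different angle; the result is the bent-blade artifact
row_times   = np.arange(N_ROWS) / LINE_RATE
blade_angle = np.degrees(omega * row_times) % 360.0
print(f"blade rotates {blade_angle[-1] - blade_angle[0]:.1f} deg "
      f"from the top row to the bottom row of one frame")
```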

Sub-Pixel Object Hyper-sampling

This demo shows how to improve the modeling of sub-pixel objects by enabling "hypersampling" when a pixel’s IFOV contains specific objects. The scene contains a pair of very small box targets that represent a fraction of a percent of the pixel by area. However, even with modest sampling across the array, the correct fractional contribution of these small targets can be determined when hypersampling is enabled.

thumbnails/SubPixelObject1.png

Thermal

The following demos pertain to modeling the thermal regions of the EO/IR spectrum.

THERM Temp Solver

This scene demonstrates various ways to assign temperature to objects in a DIRSIG scene, and the units required on the input side.

thumbnails/Thermal1.png

Dynamic Shadows with THERM

This demo shows how the built-in THERM thermal model can model temporal shadow signatures. In this scenario, we have a simple scene with two boxes that we observe over a period of an hour. During that hour, the high thermal inertia background responds to the solar shadowing created by a stationary box (left) and a box that moves to a new location (center → right).

thumbnails/Thermal2.gif

Dynamic Heating and Cooling with THERM

This demo shows how the built-in THERM thermal model can model dynamic heating and cooling conditions. In this scenario, a pair of aircraft are repositioned. The first plane has been parked all day and then pulls forward to reveal a thermal shadow "scar" on the concrete. A second, identical plane has been inside the hangar all day and then pulls forward into the sunlight to reveal its cooler surface temperatures. After the two planes quickly reposition, the simulation captures a series of frames over the next hour. During that hour the shadow left by the first plane slowly fades, the second plane heats up and new shadows develop beneath both planes.

thumbnails/Thermal3.gif

Data-Driven Temp Solver

This demo shows how the data-driven temperature solver and an external temperature vs. time file can be used to drive the temperature of an object in a DIRSIG simulation. In this demo, a simple utility pole with three transformer "cans" is constructed. One of the transformer "cans" is passively heated by the Sun (using the built-in THERM temperature solver) and the remaining two are driven by unique temperature vs. time data files. The simulation generates an LWIR image every hour over the course of a 24 hour period so that the variation in the various objects can be observed.

thumbnails/DataDrivenTempSolver1.gif
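
The temperature vs. time input is conceptually just a table of time/temperature pairs. The sketch below writes a hypothetical diurnal profile in a simple two-column layout; check the data-driven temperature solver documentation for the exact column format and units DIRSIG expects.

```python
import numpy as np

# Hypothetical diurnal profile: 290 K baseline with a 15 K mid-afternoon peak
hours = np.arange(0.0, 24.0 + 0.5, 0.5)
temps = 290.0 + 15.0 * np.exp(-((hours - 14.0) / 4.0) ** 2)

# Write simple "time temperature" pairs (hypothetical filename); the exact
# layout and units required by the solver are defined in its documentation
with open("transformer_can.temps", "w") as fh:
    for h, t_k in zip(hours, temps):
        fh.write(f"{h:.1f} {t_k:.2f}\n")
```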

Balfour Temp Solver

This demonstrates the simple, empirical Balfour temperature solver that is available in DIRSIG. Specifically, it shows two plates side-by-side, where one plate is configured to use the THERM temperature solver while the other uses the Balfour temperature solver.

thumbnails/Balfour1.png

Mapped THERM Properties

For a given material, the parameters fed into the THERM temperature solver can be spatially varied using a property map image. This demonstration drapes such a property map over a terrain geometry.

thumbnails/MappedTherm1.png

Import MuSES Results

This demo shows how to import results from the MuSES temperature prediction and infrared signature model developed by ThermoAnalytics, Inc. DIRSIG can import both the geometry and the temperature results stored in a MuSES TDF file.

Important
MuSES support is only available on Windows.

thumbnails/Muses1a.png thumbnails/Muses1b.png

Temperature Map

See the demo in the Property Maps section.

LIDAR

Please consult the Lidar Modality Handbook for more information.

Nadir Looking, Single Pulse Example

This demonstrates a nadir viewing (down-looking) LIDAR system. In this case, we only shoot a single pulse at a tree object.

thumbnails/LidarStatic1.png

Whisk Scanning, Multiple Pulse Example

This demonstrates an airborne, whisk scan LIDAR system. The platform remains still while 5 pulses are sent and received in a simple scan pattern.

thumbnails/LidarWhisk1.png

Side Looking, Single Pulse Example

This demonstrates a side-looking LIDAR system, like one that might be mounted on a vehicle driving down the street and mapping the side of buildings. In this case, we only shoot a single pulse.

thumbnails/LidarSide1.png

Nadir Looking, Single Pulse, Advanced GmAPD Example

This demonstrates a nadir viewing (down-looking) LIDAR system using the Advanced GmAPD detector model. In this case, we only shoot a single pulse at a tree object.

thumbnails/LidarStatic2.png

Whisk Scanning, Multiple Pulse, Advanced GmAPD Example

This demonstrates a nadir viewing Geiger-mode Avalanche Photo-diode (GmAPD) laser radar system that whisk scans across the scene. The collection shoots 40 overlapping pulses over a scene composed of a tree on a background.

thumbnails/LidarWhisk2.png

Bi-Static Nadir Looking

This demonstrates an exaggerated LASER offset in a LIDAR system. In this case, we shoot a single pulse from a greatly offset laser (one that is 2 km behind the focal plane) angled such that it is illuminating the space vertically underneath the sensor. The primary reason for this demo is to show a laser instrument that is completely independent from the gated focal plane instrument (they are on different mounts in the platform).

thumbnails/LidarBiStatic1.png

Sloped Surface Return

This demonstrates the broadening of a return from a sloped surface. The demonstration includes simulations of returns from both a flat surface (perpendicular to the beam) and a sloped surface (not perpendicular to the beam).

thumbnails/LidarSlope1.png

Multi-Return Waveform Example

This demonstrates multiple returns from multiple objects (at multiple ranges) within a pixel. Specifically, this demo focuses on correctly configuring the pixel sub-sampling to achieve a realistic waveform.

thumbnails/LidarEdge1.png

Link Budget Verification

This is a LIDAR verification/validation demonstration. An Excel spreadsheet is included which shows a hand-computed, "expected" result which can be compared against the DIRSIG output.

thumbnails/LidarValidate1.png
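
The hand computation in the spreadsheet follows the standard laser radar range equation for an extended Lambertian target that fills the beam. A minimal re-computation with assumed parameter values (not necessarily those used in the demo):

```python
import math

# Assumed system parameters for a hand-computed link budget
P_TX  = 1.0e3        # peak transmit power [W]
RHO   = 0.3          # Lambertian target reflectance
D_RX  = 0.10         # receiver aperture diameter [m]
R     = 1000.0       # range to target [m]
T_ATM = 0.9          # one-way atmospheric transmission
ETA   = 0.5          # combined optics/detector efficiency

a_rx = math.pi * (D_RX / 2.0) ** 2   # receiver aperture area [m^2]

# Laser radar range equation for an extended Lambertian target that fills
# the beam: P_rx = P_tx * (rho / pi) * (A_rx / R^2) * T_atm^2 * eta
p_rx = P_TX * (RHO / math.pi) * (a_rx / R**2) * T_ATM**2 * ETA
print(f"received power: {p_rx:.3e} W")
```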

Dynamic Range Gate

This demo shows how the user can drive a LIDAR simulation with external range gate data.

Incorporating Command and Knowledge Errors

This demo shows how the user can incorporate command errors (for example, jitter) and knowledge errors (for example, GPS or INS noise) into a LIDAR simulation. This demonstration includes 4 scenarios to explore the various combinations (with and without) of both command and knowledge errors.

thumbnails/LidarPointing1a.png thumbnails/LidarPointing1b.png

User-Defined Temporal Pulse Profile

This demo shows how the user can drive a LIDAR simulation with a user-defined temporal pulse profile. This demonstration includes two simulations: (a) a "clean pulse" scenario with an ideal Gaussian pulse profile and (b) an "after pulse" scenario that features the same primary pulse but also includes a lower magnitude after pulse that is 10% of the magnitude of the primary pulse and time shifted by 5 ns.

thumbnails/LidarUserPulse1a.png thumbnails/LidarUserPulse1b.png
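
Both pulse profiles are easy to tabulate. The sketch below builds the clean Gaussian and the 10%, 5 ns delayed after-pulse described above; the pulse width and the two-column output layout are assumptions, so consult the demo files for the exact format DIRSIG reads.

```python
import numpy as np

t = np.arange(0.0, 30.0e-9, 0.1e-9)   # 30 ns window sampled every 0.1 ns
t0 = 5.0e-9                           # primary pulse center [s]
sigma = 1.0e-9                        # Gaussian pulse sigma [s] (assumed)

def gaussian(times, center, width):
    return np.exp(-0.5 * ((times - center) / width) ** 2)

# "Clean pulse" scenario: a single ideal Gaussian
clean = gaussian(t, t0, sigma)

# "After pulse" scenario: the same primary pulse plus a copy at 10% of the
# magnitude, delayed by 5 ns (per the demo description)
after = clean + 0.10 * gaussian(t, t0 + 5.0e-9, sigma)

# Two-column (time, relative power) tables; layout is illustrative only
np.savetxt("clean_pulse.txt", np.column_stack([t, clean]))
np.savetxt("after_pulse.txt", np.column_stack([t, after]))
```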

Polarization

Basic Reflective Polarization

This scene demonstrates two of the BRDF models available in DIRSIG. The "chunky bar" (four flat-topped pyramids on a plate) has different roughness aluminum materials applied to each pyramid. The metal uses the Priest-Germer pBRDF model. The plate has a grass material applied, and uses the Shell Background model.

thumbnails/Polarization1a.png thumbnails/Polarization1b.png

Thermal Polarization

This demonstration is for simulating a thermal infrared, polarized system. The scene is similar to the "Beach Ball" scene described in Mike Gartley’s dissertation.

thumbnails/Polarization2a.png thumbnails/Polarization2b.png

Division of Aperture Polarization

This is a 4-camera "division of aperture" setup using the Modified-Pickering 0, 45, 90 and 135 filter set.

Division of Focal Plane Polarization

This is a single focal plane using a 2x2 micro-grid "division of focal plane" setup using the Modified-Pickering 0, 45, 90 and 135 filter set.

Water

Sub-Surface Radiance Verification

This demo recreates one of the experiments presented in "Comparison of numerical models for computing underwater light fields" by Mobley et al. [1993] (problem 6). The demo does not produce an image — instead a single detector is moved along a vertical profile in the water, measuring the upwelled radiance at each point.

thumbnails/OpenWater1.png

Waves and Caustics

This demo focuses on imaging a sphere over a submerged disk in a dynamic wave height scenario. In addition to showing how to define the water radiometry solution and optical properties, this demo shows how to use the built-in dynamic wave height model.

thumbnails/OpenWater2.gif

Space Situational Awareness (SSA)

Please note that using DIRSIG for SSA applications is still experimental. Please consult the SSA Modality Handbook for more information about current features and limitations.

Ground to Space ("Moon Sat") Scenario

This is a demonstration of looking at an exo-atmospheric object from a ground based sensor. In this case the object looks like the Moon (but is much smaller, at only 16.8 km across) and is in a geo-synchronous orbit. This "moon sat" is modeled as a primitive sphere that uses the native UV mapping of the sphere to wrap a moon texture around it.

thumbnails/MoonSat1.gif

Ground to Space Scenario

The purpose of this demonstration is to show how to use the DIRSIG platform motion model to point track a moving space object in geosynchronous orbit. The space object of the demo is a simple CAD drawing of the "Anik F1" telecommunications satellite launched by Telesat from Canada. The position of the satellite is driven by the Two-Line Element (TLE) of the actual F1 spacecraft from http://space-track.org. The collection scenario is a ground observer in Canada tracking the Anik F1 over an 8 hour period (UTC 2.0 to 10.0 on 3/17/2009).

thumbnails/Ssa1.gif
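
Outside of DIRSIG, the same TLE can be propagated with the open-source python-sgp4 library to cross-check the satellite geometry over the collection window. The two TLE lines below are hypothetical placeholders with plausible GEO values, not the actual Anik F1 element set; retrieve the real one from space-track.org.

```python
from sgp4.api import Satrec, jday

# Hypothetical placeholder TLE; substitute the actual Anik F1 element set
# retrieved from space-track.org
LINE1 = "1 26624U 00076A   09076.00000000  .00000000  00000-0  00000-0 0  9990"
LINE2 = "2 26624   0.0500 100.0000 0002000  90.0000 270.0000  1.00270000 30000"

sat = Satrec.twoline2rv(LINE1, LINE2)

# Sample the 8-hour window (UTC 2.0 to 10.0 on 3/17/2009) once per hour
for hour in range(2, 11):
    jd, fr = jday(2009, 3, 17, hour, 0, 0.0)
    err, r, v = sat.sgp4(jd, fr)          # TEME position [km], velocity [km/s]
    if err == 0:
        print(f"{hour:02d}:00 UTC  r = ({r[0]:.1f}, {r[1]:.1f}, {r[2]:.1f}) km")
```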

Space to Space Static Scenario

This shows how an earth model can be added to a DIRSIG scene in order to provide additional illumination onto a space-based target. An earth-sized sphere is placed in the scene and then attributed using a reflectance map derived from NASA "Blue Marble" data. When imaging a space-based object, this earth geometry will cause additional illumination in the form of "earth shine".

thumbnails/Ssa2.png

Space to Space Tracking Scenario

This shows how to configure a scenario with a geosynchronous satellite tracking a Low Earth Orbit (LEO) satellite. Each of the satellite orbits is specified using its respective Two-Line Element (TLE).

thumbnails/Ssa3.gif

Synthetic Aperture Radar (SAR)

Please note that using DIRSIG for SAR applications is still experimental. Please consult the Radar Modality Handbook for more information about current features and limitations.

Stripmap collection of Corner Reflector Array

This demo includes a basic stripmap SAR platform configuration that collects an array of corner (trihedral) reflectors. The complex phase history file produced by DIRSIG can be focused by the supplied Matlab program to produce the final SAR image.

thumbnails/StripmapSar1.gif

Spotlight collection of Corner Reflector Array

This demo includes a basic spotlight SAR platform configuration that collects an array of corner (trihedral) reflectors. The complex phase history file produced by DIRSIG can be focused by the supplied Matlab program to produce the final SAR image.

thumbnails/SpotlightSar1.gif

DIRSIG5 Specific

Normal Map

This demo focuses on the use of "normal maps" to emulate high resolution surface topology on low resolution geometry models. Similar to bump maps, a normal map can be used to spatially modulate the surface normal vector across an otherwise flat surface. This is accomplished by encoding the XYZ normal vector into an RGB image. In this example, a pair of real 3D objects (a hemisphere and a truncated pyramid) are featured and a second pair of the same objects are emulated using a normal map.

thumbnails/NormalMap1.gif
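
The XYZ-to-RGB encoding is the usual affine mapping of each normal component from [-1, 1] into [0, 255]. A minimal sketch of the encode/decode pair (function names are illustrative):

```python
import numpy as np

def encode_normal(n):
    """Map a unit normal with components in [-1, 1] to an 8-bit RGB pixel."""
    n = np.asarray(n, dtype=np.float64)
    n = n / np.linalg.norm(n)
    return np.round((n + 1.0) * 0.5 * 255.0).astype(np.uint8)

def decode_normal(rgb):
    """Recover the (approximate) unit normal from an 8-bit RGB pixel."""
    n = np.asarray(rgb, dtype=np.float64) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)

# A straight-up normal encodes to the familiar normal-map blue (128, 128, 255)
print(encode_normal([0.0, 0.0, 1.0]))   # -> [128 128 255]
```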

Generating Image Chip Sets for Machine Learning

In order to facilitate robust training of machine learning algorithms, we want a large number of images featuring the content of interest. These image "chips" should span a wide range of acquisition parameters. This plugin was created to streamline the generation of image chips with "labels" by automating the sampling of a user-defined parameter space in a single simulation rather than relying on an infrastructure of external scripts to iterate through the parameter space.

thumbnails/ChipMaker1.png

Water plugins for a static surface and IOP_MODEL properties

This scene introduces the water plugins for DIRSIG5 and demonstrates the new radiative transfer engine in the context of caustics formed at the bottom of a stepped pool (for the clear water scenario). Two scenarios are provided: a turbid water case and a clear water case.

thumbnails/Caustics1.png