This is the manual for the command-line scape_tool program that is distributed with DIRSIG and can be used to generate polygon meshes for terrains from a variety of input raster and point elevation datasets.

Background

This tool is primarily derived from code developed at Carnegie Mellon University (CMU) in the late 1990s. Mike Garland released the source code for a program named scape as a demonstration of the algorithms (and the implementation of those algorithms) described in the paper "Fast Polygonal Approximation of Terrains and Height Fields" written by himself and Paul Heckbert. CMU maintains the original website for all this work, including softcopy versions of the paper and the original scape code.

The original tool focused on producing a triangular irregular network (TIN), which is a polygon mesh of triangles with irregularly spaced vertexes. This irregularity allows the vertex density to be higher in areas with more structure, where closely spaced points are needed to capture stronger vertical gradients. The general algorithm for producing a TIN involves evaluating the current mesh and choosing a location to insert a new vertex, which cuts an existing triangle into two or more new triangles. There are multiple algorithms for choosing where to insert a new vertex. The original tool included a standard Delaunay triangulation algorithm, but it also introduced a set of "data dependent" algorithms that insert each new vertex in an attempt to minimize mesh fit metrics such as height or slope errors.

The scape_tool described here is simply an enhanced version of the original code written by Garland. These enhancements include the following additions:

  • The program originally supported only input raster datasets using the STM format.

    • The program now supports ENVI image files as input using any datatype (although the core algorithm and data structures operate on 16-bit data, so 32-bit and floating point elevation data is scaled into 16-bits).

    • The program also supports irregularly spaced ASCII/Text point data. This allows point data from ladar systems to be used to create terrain meshes.

  • The program originally supported only output polygon meshes using a simple, non-standard format.

    • The program now writes the output mesh as an Alias/Wavefront OBJ file (the --obj_filename option), which is the most commonly used format for importing 3D geometry into a DIRSIG scene.

Useful Supporting Tools

The following additional tools are useful when converting point or raster elevation data to facetized terrains:

  • The Geospatial Data Abstraction Library (GDAL) toolbox. GDAL is not only a library of useful functions for geospatial data manipulation, but also a toolbox of useful command-line tools for manipulating and converting georeferenced raster and vector data in commonly used formats.

  • The Meshlab tool is a graphical tool for processing and editing facetized object data. It has decent support for importing common 3D mesh data formats, an expansive library of tools for processing (filtering, converting, etc.) loaded data, and the ability to save to the Alias/Wavefront OBJ file format, which is the most commonly used format for importing 3D geometry into a DIRSIG scene.

  • The DIRSIG object_tool utility is a command-line tool that can be used to perform a number of common operations on facetized object data.

Usage and Options

The options for the tool can be found by running the command without any options:

$ scape_tool
usage: scape_tool [input options] --obj_filename=<filename>

To specify the maximum number of points to add:
  --max_vertexes=<#>

To use an STM format input file:
  --stm_filename=<filename>
To use an ENVI image format input file:
  --img_filename=<filename>
To use an ASCII/Text XYZ format input file:
  --xyz_filename=<filename>

To specify the input coordinate frame for an XYZ file:
  --xyz_frame=<scene,geodetic> (default is 'scene')
To specify the size for a gridded XYZ file:
  --xy_size=<X,Y>
To specify the image GSD or sampling to regularize XYZ data:
  --gsd=<value>

To add UV texture coordinates to facets:
  --add_uv
To specify the material label for the facets:
  --mat_label=<string>

Maximum Vertexes

The primary variable impacting the output facet mesh is the number of vertexes inserted into it. The minimum number is 4, which results in a pair of triangular facets that span the area. Any number greater than 4 results in additional vertexes and the triangles that use them. The maximum number of vertexes that can be used is equal to the number of elements in the raster grid (or the internally generated grid when using XYZ data), so requesting more than N x M vertexes when the grid is N x M is overkill. Ideally, the triangulation will find constant-slope areas that can be represented by larger facets, reducing the total number of facets required to represent the surface. In practice, the user should specify a number less than the number of elements in the grid.

The default value is equal to the size of the grid.
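
For example, a hypothetical run that caps the mesh at 25,000 vertexes for an ENVI elevation raster might look like the following (the file names dem.img and dem.obj are placeholders):

$ scape_tool --img_filename=dem.img --max_vertexes=25000 --obj_filename=dem.obj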

Input File Options

The tool supports different input file formats via different options:

--stm_filename

Use a binary, raster STM file as the input.

--img_filename

Use a binary, raster ENVI file as the input.

--xyz_filename

Use an ASCII/Text, point cloud file as the input. Each line contains a space-delimited triplet of non-uniformly spaced XYZ values.
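
For illustration, the first few lines of such a file might look like the following (the values are made up, but are consistent with the example scene used later in this manual):

-121.984 -124.401 -37.882
-121.957 -124.395 -37.874
-121.931 -124.408 -37.869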

General Options

The following options can be used with any of the input data options:

--add_uv

The output OBJ file will include a simple 2D UV coordinate system that spans the XY bounding box (min/max) of the terrain.

--mat_label

The output OBJ file will assign the supplied alpha-numeric ID (label) to all the facets in the terrain.
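
For example, with --add_uv and --mat_label=terrain the output OBJ would be expected to carry texture coordinates as vt records and the label as a usemtl record ahead of the facet definitions, along the lines of the hypothetical excerpt below (the exact records and ordering written by the tool may differ):

# hypothetical OBJ fragment (three vertexes, three UV coordinates, one facet)
v  0.000 0.000 -38.403
v  1.000 0.000 -38.210
v  0.000 1.000 -38.105
vt 0.000 0.000
vt 0.004 0.000
vt 0.000 0.004
usemtl terrain
f 1/1 2/2 3/3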

XYZ Specific Options

The following options are specifically used when XYZ point data is used as input (the --xyz_filename option). The XYZ input data can be either irregularly sampled (for example, raw data from a scanning lidar that has not been snapped to a grid) or regularly sampled (for example, data that was created by converting a raster elevation map (e.g., a DEM, DTED, etc.) to XYZ values). When irregularly spaced points are provided, the data will be binned into an internal grid for triangulation. In this case, the --gsd option is used to define the size of the grid cells in meters (assuming the input points are also in meters). When regularly spaced data is provided, use the --xy_size option to specify the dimensions of the grid the data came from.

--xyz_frame

If the input XYZ triplets are lat/lon/alt triplets (degrees), then this option must be specified with the geodetic value so that the data can be projected into a Scene ENU coordinate system. This will result in a facet mesh that captures the curvature of the Earth. The default for this option is scene, which assumes the input data is already in the Scene ENU coordinate system.

--xy_size (for regularly spaced points)

If the XYZ data was derived from gridded data (i.e., converted from a raster elevation model), then use this option to specify the dimensions of that original grid. When using this option, the input data is assumed to be in row-major order.

--gsd (for irregularly spaced points)

The grid is sized to span the data and the ground sampling distance (GSD) of the grid is automatically chosen as 1/sqrt(avg_point_density). If you want to override that sampling in the internal grid, use this option.
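
As a concrete check, the point cloud used in the workflow example later in this manual reports an average density of 536.464 points/m^2, and applying the formula above reproduces the 0.043 meter GSD shown in that log (verified here with bc, though any calculator will do):

$ echo "scale=3; 1 / sqrt(536.464)" | bc
.043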

Common Workflows

This section outlines workflows using a series of tools to convert common data formats into a terrain mesh.

Using an ASCII/Text Point Cloud

A common usage is to create a terrain mesh from a point cloud produced by an airborne LIDAR or by image-based workflows (e.g., digital photogrammetry or structure from motion).

In this case, we have the point cloud as an ASCII/Text file formatted as space-delimited XYZ triplets, with each triplet (point) on its own line:

$ scape_tool --max_vertexes=100000 --gsd=1 --mat_label=terrain --xyz_filename=terrain.xyz --obj_filename=terrain.obj
# Input XYZ Filename = terrain.xyz
# Input XYZ Frame = scene
# Output OBJ Filename = terrain.obj
# Maximum number of vertexes = 100000
# Using Delaunay triangulation
# XYZ File:
#     Point count = 32577754
#     X range =  -122.012 - 122.012
#     Y range =  -124.428 - 124.428
#     Z range =  -38.403 - 4.334
#     Scene size = 244.024 x 248.856 x 42.737
#     Scene area = 60726.837 m^2
#     Point density = 536.464 points/m^2
#     GSD = 1.000 meters/element (user-supplied)
#     Grid size = 245 x 249
#     Number of grid elements        = 61005
#     Number of used grid elements   = 56786
#     Number of unused grid elements = 4219
#     Percentage of used elements    = 93.084%
#     Average number of points in grid elements = 573.693
# 4219 elements ignored in input height field
# no more candidates

Using Maximum Vertexes

In this case, we set --max_vertexes to 100000 and --gsd to 1.0 meters. This resulted in a 245 x 249 grid spanning the scene, or 61,005 grid elements. Since 100000 > 61005, a vertex is eventually added at every single element in the grid (with roughly 39,000 to spare). Hence, the program finishes with the message:

# no more candidates
Tip: The message "4219 elements ignored in input height field" indicates that some elements in the grid contained no data. This commonly occurs because the grid is axis-aligned to the bounding box of the data, but the data might not span the entire area of the box.
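
The grid element count and the number of spare vertexes quoted above are simple arithmetic that can be verified directly (shown here with bc):

$ echo "245 * 249" | bc
61005
$ echo "100000 - 61005" | bc
38995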

The point of a TIN mesh is to represent the terrain with fewer facets than brute-force insertion of a vertex at every element in the grid. This is achieved by recognizing that some regions of the mesh have constant slope and can be represented by larger facets that span grid elements. If we repeat the run above but with --max_vertexes=50000 (where 50000 < 61005), then the algorithm will eventually exhaust the vertex budget and the program finishes with the message:

# Used all 50000 vertexes!
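
For reference, the corresponding command is identical to the earlier run except for the reduced vertex budget:

$ scape_tool --max_vertexes=50000 --gsd=1 --mat_label=terrain --xyz_filename=terrain.xyz --obj_filename=terrain.obj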

Using GSD

In the example above, the GSD for the internally constructed grid was specified at 1.0 meters via the --gsd option. If this option is not supplied then the tool will automatically compute a value using the average point density:

Auto-computed grid resolution using the average point density.
# XYZ File:
#     Point count = 32577754
#     X range =  -122.012 - 122.012
#     Y range =  -124.428 - 124.428
#     Z range =  -38.403 - 4.334
#     Scene size = 244.024 x 248.856 x 42.737
#     Scene area = 60726.837 m^2
#     Point density = 536.464 points/m^2
#     GSD = 0.043 meters/element (computed)
#     Grid size = 5653 x 5764
#     Number of grid elements        = 32583892
#     Number of used grid elements   = 8775220
#     Number of unused grid elements = 23808672
#     Percentage of used elements    = 26.931%
#     Average number of points in grid elements = 3.712

The input point cloud in this case is very non-uniform. Due to the many vertical structures in the scene, there are regions with a large number of points distributed vertically and regions that are sparsely sampled horizontally. As a result, the average point density is a poor estimate of the actual point uniformity and, hence, a bad metric for automatically computing the GSD. What we notice in this run is that only a small fraction (about 27%) of the grid elements contain data, compared to the 93% fraction from the run with a user-supplied GSD of 1.0 (where 1.0 >> 0.043). Although heavily influenced by the uniformity characteristics of any given point cloud, it is recommended that the grid contain > 95% used elements.

Using a GeoTIFF or DTED Input DEM

It is common for a DEM to be stored in a GeoTIFF image container. The workflow for importing a GeoTIFF or DTED DEM is as follows:

Convert the GeoTIFF or DTED to Point Data

The first step is to convert the GeoTIFF or DTED raster data file to an ASCII/Text XYZ file using the gdal_translate tool from GDAL:

$ gdal_translate n40_w089_1arc_v2.tif -of XYZ n40_w089_1arc_v2.xyz

or

$ gdal_translate n40_w089_1arc_v2.dt2 -of XYZ n40_w089_1arc_v2.xyz

The output file consists of 3 columns: +W longitude (degrees), +N latitude (degrees) and altitude (meters).

Get the Dimensions of the Data

Since we are converting a raster data source to XYZ points, we will want the dimensions of the original raster data to give to scape_tool later. We can use the gdalinfo tool to get this info:

$ gdalinfo n40_w089_1arc_v2.tif
Driver: GTiff/GeoTIFF
Files: n40_w089_1arc_v2.tif
Size is 9169, 13489
...
[additional output deleted for documentation purposes]
...

From this we can see the original raster elevation model is 9169 x 13489.

Generate a TIN from the Point Data

The next step is to generate a triangular irregular network (TIN) from the XYZ point data using the scape_tool included with DIRSIG:

$ scape_tool --xyz_filename=n40_w089_1arc_v2.xyz --xyz_frame=geodetic \
    --xy_size=9169,13489 --obj_filename=n40_w089_1arc_v2.obj

The --xyz_frame option tells the tool that the input XYZ data is in geodetic coordinates (latitude, longitude and altitude). The tool will then convert it from this absolute coordinate system to a Scene ENU coordinate system. In addition to handling the horizontal conversion, it will correctly introduce the curvature of the Earth as geoid-relative altitudes are converted to the Cartesian altitudes of the Scene ENU coordinate system. Since the XYZ point data is already gridded (it came from a raster image), the --xy_size option was included with the previously found dimensions of the raster image data.

Optional Post-Processing

This section outlines some optional post-processing steps that might be performed depending on the needs of the user.

Adding Vertex Normals

Vertex normals are normal vectors associated with the vertexes shared by facets rather than associated with the facets themselves. When present, the normal across a facet is interpolated (based on position within the facet) from the normal at the vertexes defining the facet. This interpolation has the effect of smoothing the surface without adding additional (finer resolution) geometry.
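
In the OBJ format, vertex normals are stored as vn records that the facets reference alongside their vertex indexes. A hypothetical fragment (illustrative only, not the literal output of any particular tool) looks like this:

# hypothetical OBJ fragment (three vertexes, three vertex normals, one facet)
v  0.000 0.000 -38.403
v  1.000 0.000 -38.210
v  0.000 1.000 -38.105
vn 0.000 0.000 1.000
vn 0.100 0.000 0.995
vn 0.000 0.100 0.995
f 1//1 2//2 3//3

When this facet is shaded, the normal at any interior point is interpolated from the three vn vectors rather than taken from the facet plane, which is what produces the smoother appearance.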

There are several options for adding vertex normals, but the easiest is to use the object_tool utility included with DIRSIG:

$ object_tool --input_filename=terrain.obj --addvertexnormals --output_filename=terrain_vn.obj

An alternative is to use the free and open-source tool Meshlab. The advantage of using Meshlab is that you get a 3D rendering of the terrain mesh and can visualize the impact of using the vertex normals:

  • Select File → Import Mesh to import the OBJ file produced in the last step.

  • Run the Filters → Normals, Curvature and Orientation → Compute Vertex Normals tool

    • This will add vertex normals for a smoother appearing terrain.

  • Select File → Export Mesh to export the mesh back to the OBJ file.