Examples

The performance of the direct
discretization of discrete 3D surfaces is
demonstrated on several examples. The first example is a rather
academic one and investigates the performance^{10} of the implemented
algorithm on the unit hemisphere. The other two examples - a concrete
dam and the Stanford bunny - are more realistic and are presented to
also demonstrate the range of capabilities of the algorithm.

In the first example, a set of uniform meshes over the unit hemisphere has been generated using three different uniform control grids. The control grids 1, 2, and 3 contain 408, 1680, and 6744 elements, respectively. For each control grid, seven meshes have been generated with the target element sizes 0.448, 0.224, 0.112, 0.056, 0.028, 0.014, and 0.007, respectively. Two sets of results have been produced - the first one (Fig. 54) with the ``exact'' projection (EP) technique to the limit surface and the second one (Fig. 55) with the ``approximate'' projection (AP) to the Bezier triangular patch (with the ``exact'' projection in the last cycle of the mesh smoothing). The convergence criterion has been set to 0.1 % of the local element size (for the ``exact'' as well as ``approximate'' projections). The dependence of the computational time on the number of generated elements for the individual control grids is depicted in Figures 54 and 55. Note that instead of the total computational time, only the part corresponding strictly to the mesh generation phase and the part related to a single cycle of the smoothing phase (either with the ``exact'' or ``approximate'' projection) are reported.
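The ``approximate'' projection targets a Bezier triangular patch. As an illustration only (the actual patch construction and control-net layout used by the algorithm are not reproduced here), a cubic Bezier triangle can be evaluated at barycentric coordinates by the de Casteljau algorithm:

```python
import numpy as np

def eval_bezier_triangle(cp, u, v, w):
    """Evaluate a cubic Bezier triangular patch at barycentric
    coordinates (u, v, w), u + v + w = 1, by de Casteljau reduction
    of the 10-point control net cp (keyed by (i, j, k), i+j+k = 3)."""
    pts = {idx: np.asarray(p, float) for idx, p in cp.items()}
    for deg in (2, 1, 0):
        # one reduction step: degree deg+1 net -> degree deg net
        pts = {
            (i, j, deg - i - j):
                u * pts[(i + 1, j, deg - i - j)]
                + v * pts[(i, j + 1, deg - i - j)]
                + w * pts[(i, j, deg - i - j + 1)]
            for i in range(deg + 1)
            for j in range(deg + 1 - i)
        }
    return pts[(0, 0, 0)]

# an affinely placed (planar) control net reproduces the plane exactly
A, B, C = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
cp = {(i, j, k): (i * A + j * B + k * C) / 3.0
      for i in range(4) for j in range(4 - i) for k in [3 - i - j]}
corner = eval_bezier_triangle(cp, 1.0, 0.0, 0.0)   # -> A
```

A projection onto the patch would iterate such evaluations until the displacement falls below the convergence criterion (0.1 % of the local element size above).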

It is quite clear from Figure 54 that, when using only the
``exact'' projection, each time the size of the
control triangles is halved, the total computational time is reduced by
a constant value (specific for a particular element size). This can be explained by
the reduction, by one, of the level of the subdivision required by the convergence
criterion whenever the control grid element size is halved. It is also
apparent that this results in O(N log N) computational complexity of
the overall algorithm. On the other hand, when using the ``approximate''
projection, the computational complexity of the mesh generation and
smoothing based on the ``approximate'' projection approaches O(N)
(the logarithmic contribution of the octree data structure is
negligible), similarly as in the case of the direct discretization of
parametric 3D surfaces (see
Subsection Computational
Complexity of Section **Direct Triangulation of 3D Surfaces**).
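The two regimes can be mimicked by a toy cost model (all constants and element counts below are hypothetical, chosen only to illustrate the argument): with the ``exact'' projection every projected node descends through the subdivision hierarchy, whose depth grows by one each time the ratio of control to target element size doubles, while the ``approximate'' projection costs a fixed amount per node:

```python
import math

def subdivision_depth(h_control, h_target):
    """Levels of subdivision needed before the local patch size
    drops below the target element size (illustrative model)."""
    return max(0, math.ceil(math.log2(h_control / h_target)))

def exact_projection_cost(n_elems, h_control, h_target):
    # each node traverses the subdivision hierarchy down to the limit
    # surface -> cost ~ N * log(h_control / h_target), i.e. O(N log N)
    return n_elems * subdivision_depth(h_control, h_target)

def approx_projection_cost(n_elems):
    # one patch evaluation per node, independent of depth -> O(N)
    return n_elems

# halving the control grid element size lowers the depth by one, so the
# exact-projection cost drops by the constant n_elems, as in Figure 54:
c_coarse = exact_projection_cost(10000, 0.8, 0.007)
c_fine   = exact_projection_cost(10000, 0.4, 0.007)
print(c_coarse - c_fine)   # -> 10000
```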

An interesting issue to be addressed is how much the limit surface deviates from the correct shape or, in other words, how well the geometrical information is preserved by the adopted modified butterfly subdivision scheme. The maximum inside and outside deviations of the nodes of the initial control grids from the ideal unit hemisphere are in all cases around . The actual quality of the generated meshes, in terms of the accuracy of the geometrical representation, is presented, again as the maximum inside and outside deviations of the nodes from the ideal unit hemisphere, in Figure 56. It reveals that the accuracy of the geometrical representation deteriorates (though by no more than one order of magnitude) with the decreasing number of elements in the initial control grid and, initially, also with the decreasing target element size. However, starting from a threshold element size (in this particular case 0.056), the deviations remain almost constant with respect to a further decreasing target element size. Similar observations can be made when the generated mesh is recursively used as the initial control grid for the next discretization.
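For this benchmark, the reported deviations can be measured, for instance, as signed radial distances of the mesh nodes from the unit hemisphere (a minimal sketch; the node coordinates below are made up):

```python
import numpy as np

def hemisphere_deviations(nodes):
    """Signed radial deviations of mesh nodes from the unit
    hemisphere: positive outside the sphere, negative inside."""
    r = np.linalg.norm(np.asarray(nodes, float), axis=1)
    dev = r - 1.0
    return dev.max(), dev.min()   # max outside, max inside deviation

# three hypothetical nodes: slightly outside, slightly inside, exact
nodes = [[0.0, 0.0, 1.001], [0.999, 0.0, 0.0], [0.0, 1.0, 0.0]]
out_max, in_max = hemisphere_deviations(nodes)
```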

In the next example, the body of a concrete dam (Fig. 57) has been discretized using five control grids, this time comprising 806, 1632, 3284, 6566, and 13128 triangular elements, respectively, into meshes of target element size 0.5, 0.22, 0.16, 0.11, 0.091, 0.079, 0.071, 0.065, and 0.06, respectively. Only the enhanced version with the ``approximate'' projection has been used in this case. The resulting dependence of the total computational time on the number of generated elements for the individual control meshes (Fig. 58) approaches O(N) computational complexity and exhibits only negligible sensitivity to the density of the control grid (the remaining sensitivity being clearly caused by the last smoothing cycle with the ``exact'' projection). Note that the total computational time in this case includes the mesh generation using the ``approximate'' projection with the convergence limit set to 10 % (of the local element size), five cycles of the Laplacian smoothing based on the ``approximate'' projection with the convergence limit set to 1 %, and finally one cycle of the Laplacian smoothing utilizing the ``exact'' projection with the convergence tolerance set to 0.1 %. However, despite the favourable computational complexity achieved, the performance of the algorithm in terms of the number of elements generated per second is considerably worse when compared to the performance of the same algorithm on parametric 3D surfaces (compare, for example, Figs. 39 and 58).
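A minimal sketch of Laplacian smoothing combined with projection may look as follows (the neighbour structure and the projection operator are placeholders; here the unit sphere stands in for the limit surface, and the convergence check on the node displacement is omitted for brevity):

```python
import numpy as np

def laplacian_smooth(coords, neighbors, project, n_cycles=5):
    """Each cycle moves every listed node to the centroid of its
    neighbours and projects the result back onto the surface."""
    coords = np.asarray(coords, float).copy()
    for _ in range(n_cycles):
        new = coords.copy()
        for i, nbrs in neighbors.items():
            new[i] = project(coords[list(nbrs)].mean(axis=0))
        coords = new
    return coords

# unit sphere as a stand-in limit surface
project = lambda p: p / np.linalg.norm(p)

coords = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.],
                   [0.5, 0.5, 0.7]])          # node 3 is to be smoothed
neighbors = {3: [0, 1, 2]}                    # hypothetical connectivity
smoothed = laplacian_smooth(coords, neighbors, project, n_cycles=2)
```

In the scheme described above, the first cycles would use the cheap ``approximate'' projection and only the final cycle the ``exact'' one.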

As the last example, the discretization of the Stanford
bunny^{11} is presented
(Fig. 60). Again, three control grids containing
11721, 27972, and 59333 elements, respectively, have been used to
triangulate the body of the bunny, this time, however, with
curvature-based element size control.
The adopted projection strategy, including the limit tolerances, is the
same as in the case of the concrete dam in the previous example.
The relation between the total computational time and the number
of generated elements for the individual control meshes
(Fig. 59) is very close to a linear profile, and the mesh
generation phase is almost independent of the density of the control
grid. However, the total elapsed time for the meshes generated over the
coarsest control grid is slightly larger, which is caused by a larger
overhead in the ``exact'' projection during the last smoothing cycle
due to the increased number of correction cases.
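Curvature-based element size control can follow, for example, a spanning-angle rule (the rule and all bounds below are assumptions for illustration, not necessarily the formula used here): an element edge should subtend at most a given angle on the local osculating circle of radius 1/kappa, clamped to user-specified bounds:

```python
import math

def curvature_size(kappa, alpha_deg=20.0, h_min=0.005, h_max=0.1):
    """Hypothetical curvature-based sizing: an element edge spans at
    most the angle alpha on the osculating circle of radius 1/kappa."""
    if kappa <= 0.0:
        return h_max                # flat region: no curvature limit
    # chord subtending angle alpha on a circle of radius 1/kappa
    h = 2.0 * math.sin(math.radians(alpha_deg) / 2.0) / kappa
    return min(h_max, max(h_min, h))
```

Such a rule yields small elements on the bunny's ears and large ones on its back, which is exactly the kind of graded mesh shown in Fig. 60.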

^{10} All elapsed times have been obtained on a Dell notebook with an Intel Pentium II 300 MHz processor and 128 MB of RAM running Red Hat Linux 5.2.

^{11} Courtesy of the Stanford data repository.

*Daniel Rypl
2005-12-07*