
Implementation Issues

The spatial localization required for the intersection checks and the front management, in terms of edge selection and searching, are generally considered the bottlenecks of the advancing front procedure. Both aspects are addressed in the presented approach. The spatial localization is implemented using an octree data structure, whose computational complexity becomes nearly constant for a reasonable octree depth. When searching for the most suitable candidate in the neighbourhood, the local 3D Delaunay property is used as a criterion, which reduces the probability of a subsequent intersection. Since gradual variation of the element size is ensured, no special edge selection strategy is required. Therefore, the simplest approach has been chosen, which always uses the first available edge in the front. This obviously leads to constant computational complexity of the front management.
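The Delaunay-based candidate selection can be illustrated with a minimal 2D sketch (the approach above works with the 3D analogue on the surface, and all function names here are illustrative assumptions, not the actual implementation): for a given front edge, the first properly oriented candidate whose circumcircle contains no other candidate is accepted.

```python
def orient(a, b, c):
    # Twice the signed area of triangle abc (positive if counter-clockwise).
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def in_circumcircle(a, b, c, d):
    # True if d lies strictly inside the circumcircle of CCW triangle abc
    # (standard 3x3 in-circle determinant test).
    rows = [(p[0]-d[0], p[1]-d[1]) for p in (a, b, c)]
    m = [(x, y, x*x + y*y) for x, y in rows]
    det = (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
         - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
         + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    return det > 0

def select_candidate(edge, candidates):
    # Return the first candidate forming a triangle with `edge` whose
    # circumcircle contains no other candidate (local Delaunay property).
    a, b = edge
    for c in candidates:
        if orient(a, b, c) <= 0:   # skip wrongly oriented candidates
            continue
        if all(d is c or not in_circumcircle(a, b, c, d) for d in candidates):
            return c
    return None
```

In the actual 3D setting the candidates would first be gathered from the octree cells near the edge, so only a small neighbourhood is tested.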

A crucial aspect of the proposed mesh generation strategy is the point-to-surface projection. The simple and efficient algorithms available for projection onto parametric surfaces cannot be adopted, because the limit surface lacks a parameterization. Moreover, the situation is further complicated by the fact that the normal to the limit surface can be evaluated only at the nodes of the original or refined control grid. Therefore, to make the projection sufficiently accurate (in terms of the distance from the limit surface and the match with the exact normal), the original control grid would have to be subdivided to a high level, resulting in a prohibitive amount of data to be stored. In [5], an efficient and reliable approach for the projection of a point onto the limit surface has been proposed. It is based on localized progressive refinement of the control grid towards the actual projection (Fig. 3). A recursive implementation of this algorithm enables a virtually unlimited number of refinements to be performed with constant memory requirements. Note that the refinement is temporary and is discarded once the projection is completed. It is important to realize that the shape of the control grid triangles on any refinement level continues to evolve (the triangle sides become curved) as midside nodes are introduced on higher levels. A robust implementation of the projection therefore requires a reliable algorithm for the initial triangle localization and the subsequent sub-triangle selection (for the progressive refinement). Since the evolution of the triangle shape during the refinement can hardly be predicted, a tool for correcting the selection is of primary importance. This is achieved by tracing the sub-triangle selection during the refinement, so that backtracking is possible if an invalid selection is detected. The correction then consists of traversing the refinement history in the opposite direction and detecting the highest level at which a different selection (possibly leading to the correct projection) may be made.
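The recursion-with-history pattern behind the sub-triangle selection and backtracking can be sketched as follows. This is a deliberately simplified flat 2D model (midpoint 1-to-4 subdivision and a barycentric containment test stand in for the evolving curved control grid); the names and tolerances are illustrative assumptions, but the structure of the search with backtracking is the same.

```python
def barycentric(p, tri):
    # Barycentric coordinates of point p with respect to triangle tri.
    (ax, ay), (bx, by), (cx, cy) = tri
    det = (bx-ax)*(cy-ay) - (cx-ax)*(by-ay)
    u = ((bx-p[0])*(cy-p[1]) - (cx-p[0])*(by-p[1])) / det
    v = ((cx-p[0])*(ay-p[1]) - (ax-p[0])*(cy-p[1])) / det
    return u, v, 1.0 - u - v

def children(tri):
    # Midpoint (1-to-4) subdivision of a triangle.
    a, b, c = tri
    mab = ((a[0]+b[0])/2, (a[1]+b[1])/2)
    mbc = ((b[0]+c[0])/2, (b[1]+c[1])/2)
    mca = ((c[0]+a[0])/2, (c[1]+a[1])/2)
    return [(a, mab, mca), (mab, b, mbc), (mca, mbc, c), (mab, mbc, mca)]

def locate(p, tri, depth, path=()):
    # Recursively select the sub-triangle containing p, recording the
    # selection history in `path`; if a branch turns out not to contain p,
    # return None so the caller backtracks to the next-best child.
    if min(barycentric(p, tri)) < -1e-12:
        return None                       # invalid selection -> backtrack
    if depth == 0:
        return path, tri
    kids = children(tri)
    # try children ordered by how deeply p lies inside them
    order = sorted(range(4), key=lambda i: -min(barycentric(p, kids[i])))
    for i in order:
        hit = locate(p, kids[i], depth - 1, path + (i,))
        if hit is not None:
            return hit
    return None
```

In the real algorithm the sub-triangles deform between levels, so the greedy choice can genuinely fail and the recorded path is what makes the correction by backtracking possible.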

Since some of the projections during the mesh generation can be accomplished with considerably lower accuracy (without any significant impact on the resulting mesh), an alternative approximate but more efficient projection technique was suggested. It is based on approximating the triangles of the control grid by Bezier triangular patches and employing a standard projection technique applicable to parametric surfaces. In the current implementation, quadratic Bezier triangles have been employed. Their control polygon can be easily calculated using only nodes from the first level of the global subdivision, without the need to proceed to further subdivision levels. Note that keeping the order of the patch low increases the computational performance. On the other hand, a quadratic patch is not capable of properly capturing a change of curvature from convex to concave, in which case the projection gives very unsatisfactory results. Therefore, the criterion for whether to use a patch is based on the match between the patch normal and the normal calculated on the control grid using the derivative masks. Should the normals deviate by more than a prescribed angle (in degrees) at any of the patch vertices, the recursive subdivision based projection algorithm is used instead. However, it is important to realize that for the final location of a node (typically in the last cycle of the mesh smoothing), the projection technique based on the recursive subdivision has to be used in order to comply with the constraint to the limit surface (unless it is planar).
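A quadratic Bezier triangle in barycentric form, together with the normal-deviation acceptance test described above, can be sketched like this. The control-point indexing, the vertex ordering, and the angular threshold parameter are illustrative assumptions; the six control points are indexed by multi-indices (i, j, k) with i + j + k = 2.

```python
import math

def bezier_point(cp, u, v):
    # Evaluate a quadratic Bezier triangle at barycentric (u, v, w).
    w = 1.0 - u - v
    return tuple(
        u*u*cp[2, 0, 0][d] + v*v*cp[0, 2, 0][d] + w*w*cp[0, 0, 2][d]
        + 2*u*v*cp[1, 1, 0][d] + 2*u*w*cp[1, 0, 1][d] + 2*v*w*cp[0, 1, 1][d]
        for d in range(3))

def patch_normal(cp, u, v):
    # Unit normal from the cross product of the partial derivatives with
    # respect to u and v (with w = 1 - u - v eliminated).
    w = 1.0 - u - v
    du = tuple(2*(u*(cp[2, 0, 0][d]-cp[1, 0, 1][d]) + v*(cp[1, 1, 0][d]-cp[0, 1, 1][d])
                  + w*(cp[1, 0, 1][d]-cp[0, 0, 2][d])) for d in range(3))
    dv = tuple(2*(u*(cp[1, 1, 0][d]-cp[1, 0, 1][d]) + v*(cp[0, 2, 0][d]-cp[0, 1, 1][d])
                  + w*(cp[0, 1, 1][d]-cp[0, 0, 2][d])) for d in range(3))
    n = (du[1]*dv[2]-du[2]*dv[1], du[2]*dv[0]-du[0]*dv[2], du[0]*dv[1]-du[1]*dv[0])
    s = math.sqrt(sum(x*x for x in n))
    return tuple(x/s for x in n)

def patch_acceptable(cp, vertex_normals, max_deg):
    # Accept the patch only if, at every vertex, its normal deviates from
    # the control-grid normal (here supplied in the order of the barycentric
    # corners u=1, v=1, w=1) by no more than max_deg degrees.
    corners = [(1.0, 0.0), (0.0, 1.0), (0.0, 0.0)]
    for (u, v), ref in zip(corners, vertex_normals):
        dot = sum(a*b for a, b in zip(patch_normal(cp, u, v), ref))
        if math.degrees(math.acos(max(-1.0, min(1.0, dot)))) > max_deg:
            return False
    return True
```

With the patch accepted, the standard Newton-type projection onto a parametric surface can then be run in the (u, v) domain.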

With respect to the application of the above meshing technique to limit surfaces reconstructed from STL meshes, it should be noted that planar surfaces (seemingly the simplest case) must be handled in a special way. The reason is that a planar surface, having zero curvature in all directions, prevents the use of the curvature based mechanism for tessellating it into STL faces. Therefore, the aspect ratio and in-plane orientation of the individual faces cannot be related to the curvature. This allows two neighbouring faces forming the same planar surface to differ considerably in the aspect ratio measured with respect to the shared side. As a direct consequence, the limit surface folds over itself, possibly crossing the boundary of the surface. Even though such a surface still seems to be planar, it no longer is, since the normal changes orientation from point to point, which is fatal for the meshing algorithm. Therefore, when triangulating a planar surface, the subdivision is not invoked and the faces of the control grid are used for localization purposes only.
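The detection of this special case could, for instance, reduce to checking that all face normals of the control grid agree, in which case the subdivision is skipped. The following is a minimal sketch under the assumptions that the faces are consistently oriented and that the tolerance and function names are illustrative:

```python
def face_normal(a, b, c):
    # Unit normal of triangle (a, b, c) via the cross product of two edges.
    u = tuple(b[i] - a[i] for i in range(3))
    v = tuple(c[i] - a[i] for i in range(3))
    n = (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])
    s = sum(x*x for x in n) ** 0.5
    return tuple(x/s for x in n)

def is_planar(faces, tol=1e-9):
    # True if all face normals of the control grid agree (up to tol),
    # i.e. the surface is planar; assumes consistent face orientation.
    ref = face_normal(*faces[0])
    return all(sum(a*b for a, b in zip(face_normal(*f), ref)) > 1.0 - tol
               for f in faces[1:])
```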


Daniel Rypl