
Introduction

The finite element method has become the most powerful tool for structural analysis. During the last decades, the method has matured to such a state that it is massively used in practical engineering for the solution of a broad range of problems, from linear elasticity up to highly nonlinear transient simulations of the behaviour of real materials and complex structures. However, the quality of the obtained solution depends on many aspects (including the adopted spatial and time discretization, the material model, the equation solver and its parameters, etc.). It is therefore very important to keep the solution error under control. This can be conveniently (and usually also most economically) accomplished by the application of adaptive analysis.

A very natural goal of adaptive finite element analysis is to calculate the solution of the governing partial differential equation(s) with a uniformly distributed error not exceeding a prescribed threshold in the most economical manner. This is achieved by improving the discretization in areas where the finite element solution is not adequate. It is therefore essential to have a quantitative assessment of the quality of the approximate solution and a capability of discretization enrichment. The most common error estimators can be classified into two basic groups: (i) the projection-based estimators [1][2] and (ii) the residual-based estimators [3][4][5][6]. These error estimation strategies are well established for linear problems, and many of them have been more or less successfully generalized to nonlinear problems. Nevertheless, most of them lose their sound theoretical basis when applied to nonlinear problems because they rely on properties that are valid only for linear problems.
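For illustration only (the estimator actually adopted in this paper is of the residual type and is recalled in Section 3), a projection-based estimator of the Zienkiewicz-Zhu type typically measures, on each element, the energy norm of the difference between the finite element stress field \sigma_h and a smoothed (recovered) stress field \sigma^*:

\[
\eta_e^2 = \int_{\Omega_e} (\sigma^* - \sigma_h)^T \, D^{-1} \, (\sigma^* - \sigma_h) \, d\Omega ,
\]

where \Omega_e is the element domain and D is the elastic stiffness matrix; elements with a large contribution \eta_e are candidates for refinement.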

There are three main directions of adaptive discretization enrichment. The first one, a natural choice for most engineers, is the h-version [7][8][9], which refines the computational finite element mesh while preserving the approximation order of the elements. The p-version [10] keeps the mesh fixed but hierarchically increases the order of the approximation being used. The hp-version [8][11][12] is a proper combination of the h- and p-versions and exhibits an exponential convergence rate independently of the smoothness of the solution. However, its implementation is not trivial. Similarly, the treatment of higher order elements in the p-version is rather complicated, especially when nonlinear analysis is considered.
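The h-adaptive process itself follows a simple solve-estimate-refine cycle. The following C++ fragment is only a schematic sketch of such a loop; the types Mesh and Solution and the functions solve, estimateElementErrors and refine are hypothetical placeholders, not part of the actual implementation discussed later:

   #include <algorithm>
   #include <vector>

   // Hypothetical placeholder types standing in for a real finite element code.
   struct Mesh { /* nodes, elements, ... */ };
   struct Solution { /* nodal unknowns, ... */ };

   Solution solve(const Mesh &mesh);                             // assumed FE solver
   std::vector<double> estimateElementErrors(const Mesh &mesh,
                                             const Solution &u); // assumed error estimator
   Mesh refine(const Mesh &mesh, const std::vector<double> &errors,
               double tolerance);                                // assumed h-refinement

   // Schematic h-adaptive loop: solve, estimate the element errors, refine the
   // mesh where the prescribed tolerance is exceeded, and repeat.
   Solution adaptiveSolve(Mesh mesh, double tolerance, int maxCycles)
   {
      Solution u = solve(mesh);
      for (int cycle = 0; cycle < maxCycles; cycle++) {
         std::vector<double> errors = estimateElementErrors(mesh, u);
         double maxError = *std::max_element(errors.begin(), errors.end());
         if (maxError <= tolerance)
            break;                              // error uniformly below the threshold
         mesh = refine(mesh, errors, tolerance); // local discretization enrichment
         u = solve(mesh);                        // re-solve on the refined mesh
      }
      return u;
   }

In the h-version sketched above only the mesh changes between cycles; the p- and hp-versions would instead (or additionally) raise the polynomial order of selected elements.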

Quasi-brittle materials, such as concrete, rock, tough ceramics, or ice, are characterized by the development of large nonlinear fracture process zones. Modeling progressive growth of microcracks and their gradual coalescence leads to constitutive laws with softening, i.e., with a descending branch of the stress-strain diagram. In the context of standard continuum mechanics, softening leads to serious mathematical and numerical difficulties. The boundary value problem becomes ill-posed, and numerical solutions exhibit a pathological sensitivity to the finite element mesh.
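A minimal one-dimensional illustration of such a softening law (not the particular model adopted in this paper) is the isotropic damage relation

\[
\sigma = (1 - \omega)\, E \, \varepsilon ,
\]

in which the damage variable \omega grows from 0 to 1 with increasing strain, e.g. according to the exponential softening law

\[
\omega(\kappa) = 1 - \frac{\varepsilon_0}{\kappa} \exp\!\left( - \frac{\kappa - \varepsilon_0}{\varepsilon_f} \right)
\qquad \mbox{for } \kappa \ge \varepsilon_0 ,
\]

where \kappa is the largest strain level reached so far, \varepsilon_0 is the strain at the onset of damage and \varepsilon_f controls the slope of the descending branch of the stress-strain diagram.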

The most advanced and potentially most efficient techniques ensure the objectivity of the numerical results by enriching the standard continuum with additional information about the internal structure of the material. Such regularization techniques can enforce a realistic and mesh-independent size of the region of localized strain; they are therefore called localization limiters. A wide class of localization limiters is based on the concept of a nonlocal continuum [13][14][15][16], which was introduced into continuum mechanics in the sixties and applied as a localization limiter in the eighties. A differential form of the nonlocal concept was exploited by various gradient models [17][18]. Nonlocal formulations have been elaborated for a wide spectrum of models, including softening plasticity [19], damage models [14][15][20][16], smeared crack models [13], and microplane models [21].
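In an integral-type nonlocal formulation, a suitably chosen local variable, for instance the strain \varepsilon (or a scalar measure of it), is replaced in the constitutive law by its weighted spatial average

\[
\bar{\varepsilon}(x) = \int_V \alpha(x,\xi)\, \varepsilon(\xi)\, d\xi ,
\qquad \int_V \alpha(x,\xi)\, d\xi = 1 ,
\]

where the weight function \alpha(x,\xi) decays with the distance between the points x and \xi, and its support introduces an internal length of the material that sets the size of the localized zone. This is only a generic illustration; the specific regularized model used here is described in Section 2.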

The application of the adaptivity paradigm to problems with material models regularized by the nonlocal continuum concept results in a computationally very demanding analysis, in terms of both computational time and computer resources (memory, disk space, etc.). These demands can be alleviated by performing the analysis in a parallel computing environment. A typical parallel application decreases the demands on memory and other resources by spreading the task over several mutually interconnected computers, and speeds up the response of the application by distributing the computation to individual processors. Note, however, that parallel computing is also worthwhile for applications that require almost no resources but consume an excessive amount of time, and for applications that cannot be performed on a single computer regardless of the computational time.

During the last decade, parallel computation has become quite accessible due to three main developments. Firstly, many new algorithms suitable for parallel processing have been developed, including efficient algorithms for domain decomposition. Secondly, parallel computation is no longer limited to parallel supercomputers (equipped with high technology at an even higher price) but can be performed on ordinary computers interconnected by a network into a computer cluster. Such a parallel cluster can even outperform supercomputers while keeping the investment and maintenance costs substantially lower. And thirdly, robust message passing libraries (typically MPI), portable to various hardware and operating system platforms, have been developed, which allows parallel applications to be ported to almost any platform.

The aim of this paper is to show the integration of the individual components of the h-adaptive analysis into a single framework that can be used for adaptive simulation of quasi-brittle failure on distributed-memory computing platforms. The remainder of the paper is structured as follows. Section 2 describes the regularization of the adopted material model using the nonlocal continuum concept. In Section 3, the philosophy of the residual-based error estimation is recalled. The employed refinement strategy is described in Section 4. The parallelization concept (with support for nonlocal material models) is summarized in Section 5. Section 6 then outlines the individual components of the h-adaptive analysis and their integration into a functional unit. Several implementation details are mentioned in Section 7. The application of the presented adaptive framework in a parallel computing environment based on a workstation cluster is demonstrated on an example in Section 8. Finally, some concluding remarks are made in Section 9.



