
Introduction

When electronic computers entered the market, civil engineers were among the pioneers in adopting this technology, primarily in the area of structural mechanics. The reason was simple: the slope-deflection method was widely used by practicing engineers for the analysis of frame structures, and it lent itself readily to algorithmization and coding. A large number of codes for the analysis of frame structures were developed. Several years after computers were introduced on the market, engineers discovered the finite element method, which is nowadays a general tool for the analysis of problems in civil, mechanical, electrical, chemical and environmental engineering. The development of the first systematic design tool is also connected with civil engineers: ICES (Integrated Civil Engineering System) was developed at MIT. A real milestone in the growing popularity of computer methods in engineering was the introduction of the first PC with advanced graphical capabilities. Since then, things have changed significantly. The power of today's standard PC is comparable to that of the largest minicomputers and mainframes of the past, and it is now difficult to imagine any design work without computer support.

The aim of this paper is to show how parallel technology can be used in computational mechanics for civil engineering applications in the near future, with special attention to cluster technology. The need to solve complex problems, modelling various phenomena with sufficiently high accuracy and in reasonable time, makes parallel processing attractive for a large family of applications, including structural analysis. However, it is important to realize that most traditional algorithms are inherently unsuitable for parallelization, because they were developed for sequential processing.

The most natural way to parallelize a problem is to decompose it in time or space. The individual domains are then mapped onto individual processors and solved separately, while the proper response of the whole system is ensured by appropriate communication between the domains. An efficient parallel algorithm requires a balance of the work performed on the individual domains across the processors, while keeping the interprocessor communication (the typical bottleneck of parallel computation) to a minimum.

During the last decade, parallel computation has become quite feasible, due to the following three aspects. Firstly, many new algorithms suitable for parallel processing have been developed, including efficient algorithms for domain decomposition. Secondly, parallel computation is no longer limited to parallel supercomputers (equipped with high-end technology at an even higher price) but can be performed on ordinary computers interconnected by a network into a computer cluster. Such a parallel cluster can even outperform supercomputers (such as the IBM SP2, SGI Origin, etc.) while keeping the investment and maintenance costs substantially lower. And thirdly, several message passing libraries (typically MPI and PVM), portable to various hardware and operating system platforms, have been developed, which allows parallel applications to be ported to almost any platform (including a multiplatform parallel computing cluster). A minimal illustration of this programming model is sketched below.
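
To make the combination of domain decomposition and message passing concrete, the following C sketch (not taken from this paper, and deliberately reduced to a one-dimensional model problem) shows the basic pattern: each MPI process owns one subdomain, exchanges interface values with its neighbours, and then performs purely local work. The array u, the subdomain size LOCAL_N and the smoothing update are illustrative assumptions, not part of any particular solver.

/* Sketch of a domain-decomposition step with MPI (1D model problem).
   Each process owns LOCAL_N interior unknowns plus two ghost cells that
   mirror the interface values of the neighbouring subdomains. */
#include <mpi.h>
#include <stdio.h>

#define LOCAL_N 100   /* interior unknowns per subdomain (arbitrary choice) */

int main(int argc, char **argv)
{
    int rank, size;
    double u[LOCAL_N + 2];   /* u[0] and u[LOCAL_N+1] are ghost cells */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* hypothetical initial state of the subdomain owned by this process */
    for (int i = 0; i < LOCAL_N + 2; i++)
        u[i] = (double) rank;

    /* neighbouring subdomains; MPI_PROC_NULL turns edge exchanges into no-ops */
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* interprocessor communication: exchange interface values with neighbours */
    MPI_Sendrecv(&u[1],           1, MPI_DOUBLE, left,  0,
                 &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[LOCAL_N],     1, MPI_DOUBLE, right, 1,
                 &u[0],           1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* purely local work on the subdomain (placeholder for the real solver,
       e.g. one smoothing sweep of an iterative method) */
    for (int i = 1; i <= LOCAL_N; i++)
        u[i] = 0.5 * (u[i - 1] + u[i + 1]);

    printf("rank %d of %d finished one exchange/update step\n", rank, size);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with, for example, mpirun -np 4, each of the four processes handles one subdomain of the partitioned problem. In a real structural analysis code the subdomains would be produced by a mesh partitioner and the exchanged data would be the interface degrees of freedom, but the balance between local work and boundary communication remains the decisive factor for parallel efficiency.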


Daniel Rypl
2005-12-03