[Figure: optimization of the load of a structure in order to obtain the desired shape of the letter T (formulation for maximization)]
[Figure: optimization of a testing example (formulation for maximization)]

GRBFN strategy

• GRBFN is a stochastic optimization tool combining a Radial Basis Function Network for the approximation of the cost function with a genetic algorithm for optimization
• The particular genetic algorithm implemented here is the real-coded algorithm GRADE with the CERAF strategy
• The Radial Basis Function Network (RBFN) is used to interpolate the objective function, while GRADE is used to find the global minimum of the RBFN approximation
• The pseudo code of the algorithm:

    Create_initial_neurons();
    while (!stopping_criteria) {
        Create_network();
        GRADE();
        Update_network();
    }

• First, the user needs to specify the initial neurons and to set the RBFN control parameters nstep and lambda
• nstep is the number of RBFN improvements, whereas the regularization factor lambda controls how far the algorithm moves from exact interpolation towards approximation
• The neural network is created with one layer of neurons, and training this net leads to the solution of a linear system of equations
• The radial basis function decays to zero with growing distance from its center. Therefore, a regression part was added to the code, with two possible regression functions: regpoly0 is the zero-order polynomial regression function, with which the model tends to the mean value of the objective function far from the centers; regpoly1 is the first-order polynomial regression function, with which the model has a linear character in extrapolation
• If the regression part is not necessary then regpoly_off is used.
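The three regression choices can be illustrated by the design matrices they produce. The following Python/NumPy sketch is only illustrative; the actual regpoly0.m and regpoly1.m are taken from the DACE toolbox:

```python
import numpy as np

def regpoly0(X):
    """Zero-order polynomial regression: a constant term only,
    so in extrapolation the model tends to a mean value."""
    return np.ones((X.shape[0], 1))

def regpoly1(X):
    """First-order polynomial regression: constant plus linear terms,
    giving the model a linear character in extrapolation."""
    return np.hstack([np.ones((X.shape[0], 1)), X])

def regpoly_off(X):
    """Regression part switched off: an empty design matrix."""
    return np.empty((X.shape[0], 0))
```

For n points in d dimensions, regpoly0 yields an n x 1 matrix, regpoly1 an n x (1+d) matrix, and regpoly_off an n x 0 matrix.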
• The pseudo code of Create_network is shown below:

    void Create_network(void) {
        y = y - Linear_regression_part;
        dmax = sqrt(n_dimension);
        r = (dmax*(n_dimension*n_neurons)^(-1/n_dimension))^2;
        for (i = 0; i < n_neurons; i++)
            for (j = 0; j < n_neurons; j++)
                H[i][j] = exp(-dist(c_i, c_j)^2 / r);  /* Gaussian basis values */
        Solve_linear_system(H, w, y);                  /* weights of the neurons */
    }

• dmax and r are parameters of the basis functions. dmax is the maximal distance within the domain and depends on the number of dimensions n_dimension
• For different scales in multidimensional spaces the basis functions produce different values. Therefore, a norm r is introduced in the code; it depends on dmax, on the number of dimensions of the objective function and on the number of neurons n_neurons
• The value of a basis function depends on the distance from the neuron's center; the basis functions have a Gaussian shape
• The weights of the individual neurons are obtained by solving a linear system of equations
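The construction and training of the net can be sketched as follows. This is a Python/NumPy sketch following the pseudo code and the bullets above (Gaussian basis, norm r, linear solve), not the original Matlab code; the function names are illustrative:

```python
import numpy as np

def create_network(centers, y, lam=1e-7):
    """Build and train a one-layer Gaussian RBFN.
    centers: (n_neurons, n_dim) array of neuron centers in the unit cube,
    y: objective values at the centers (regression part already subtracted),
    lam: regularization factor lambda."""
    n_neurons, n_dim = centers.shape
    # dmax: maximal distance within the unit-cube domain
    dmax = np.sqrt(n_dim)
    # norm r of the basis functions, following the pseudo code above
    r = (dmax * (n_dim * n_neurons) ** (-1.0 / n_dim)) ** 2
    # Gaussian basis values for every pair of centers
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / r)
    # training = solving a (regularized) linear system for the weights
    w = np.linalg.solve(H + lam * np.eye(n_neurons), y)
    return w, r

def evaluate(x, centers, w, r):
    """RBFN approximation at point x."""
    d2 = ((centers - x) ** 2).sum(axis=1)
    return np.exp(-d2 / r) @ w
```

With lambda approaching zero the net interpolates the training data exactly; larger lambda moves it towards a smoothing approximation.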

• The evolutionary algorithm GRADE with the CERAF strategy is used to find the optimum of the RBFN approximation
• In early stages, when the number of neurons is too small, the RBFN is not able to approximate the objective function sufficiently well. Therefore, new centers have to be added to minimize the difference between the RBFN and the objective function
• In the code Update_network, three new centers are added: one is the optimum found by GRADE, one is a random center, and the last one is created by a differential method, i.e. it is placed in the direction of the better of the two last optimization solutions
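The three new centers described above can be sketched like this. This is only an illustration of the idea in Python/NumPy, with hypothetical names and a hypothetical step size for the differential center, not the original Update_network code:

```python
import numpy as np

def update_network(centers, x_best, x_prev, bounds, rng):
    """Add three new centers to the RBFN.
    x_best: optimum of the RBFN approximation found by GRADE (assumed
    better than x_prev, the previous best solution),
    bounds: (low, high) of the box-constrained domain."""
    low, high = bounds
    # 1) the optimum found by GRADE on the RBFN approximation
    c1 = x_best
    # 2) a random center, exploring the domain
    c2 = rng.uniform(low, high, size=x_best.shape)
    # 3) a differential center, shifted in the direction of the better
    #    of the two last solutions (step length 0.5 is an assumption)
    c3 = np.clip(x_best + 0.5 * (x_best - x_prev), low, high)
    return np.vstack([centers, c1, c2, c3])
```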

• The summary of algorithm's control parameters:

parameter   description                   default value
nstep       number of RBFN improvements   300
lambda      regularization factor         0.0000001

• The GRBFN algorithm was tested on a set of twenty functions called "ANDRE"
• As mentioned above, a regression part was added to the code; it changes the behavior of the basis functions in extrapolation
• The Table below compares GRBFN without the regression part, GRBFN with regpoly0 as the regression function and GRBFN with regpoly1 as the regression function
• Each algorithm was started 100 times and stopped when the optimum was found with the given precision or when the limit on function calls was reached. This limit was set to 1000 because of insufficient memory
• The number of runs where the optimum was found with the given precision, the so-called success rate (SR), and the average number of fitness calls (ANFC) over successful runs are given in the following Table:
                                          GRBFN        GRBFN regpoly0  GRBFN regpoly1
    Function   Dim  Optimum    Precision  SR    ANFC   SR    ANFC      SR    ANFC
    F1          1   -1.12323   0.011232   100     23   100     28      100     28
    F3          1   -12.0312   0.120312   100     43   100     45      100     46
    Branin      2    0.39789   0.003979   100     51   100     24      100     62
    Camelback   2   -1.03163   0.010316   100     61   100     50      100     54
    Goldprice   2    3         0.03       100    217   100    397       53    725
    PShubert1   2   -186.731   1.867309    78    573    96    547       78    579
    PShubert2   2   -186.731   1.867309    98    540     0    ---        0    ---
    Quartic     2   -0.35239   0.003524    56     83   100    103      100     88
    Shubert     2   -186.731   1.867309   100    499   100    500      100    513
    Hartman1    3   -3.86278   0.038678   100     34   100     38      100     45
    Shekel1     4   -10.1532   0.101532     0    ---     0    ---        0    ---
    Shekel2     4   -10.4029   0.104029     0    ---     0    ---        0    ---
    Shekel3     4   -10.5364   0.105364     0    ---     0    ---        0    ---
    Hartman2    6   -3.32237   0.033224   100    130     0    ---        0    ---
    Hosc45     10    1         0.01       ---    ---   ---    ---      ---    ---
    Brown1     20    2         0.02       ---    ---   ---    ---      ---    ---
    Brown3     20    0         0.1        ---    ---   ---    ---      ---    ---
    F5n        20    0         0.1        ---    ---   ---    ---      ---    ---
    F10n       20    0         0.1        ---    ---   ---    ---      ---    ---
    F15n       20    0         0.1        ---    ---   ---    ---      ---    ---
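The SR and ANFC statistics in the Table follow directly from the protocol above. The following small helper illustrates their computation; it is an assumption of mine for clarity, not code from the benchmark:

```python
def summarize_runs(runs, limit=1000):
    """Compute success rate (SR, in %) and average number of fitness
    calls (ANFC) over successful runs.  Each run is a pair
    (found_with_precision, n_fitness_calls); runs that hit the call
    limit without reaching the required precision count as failures."""
    successes = [calls for ok, calls in runs if ok and calls <= limit]
    sr = 100 * len(successes) // len(runs)
    anfc = sum(successes) / len(successes) if successes else None
    return sr, anfc
```

For example, runs [(True, 40), (True, 60), (False, 1000), (True, 50)] give SR = 75 and ANFC = 50.0; when no run succeeds, ANFC is undefined, shown as "---" in the Table.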

Published applications

• Black-Box Function Optimization using Radial Basis Function Networks is presented in paper [1]
• A comparison of the older SADE algorithm, the GRBFN strategy and the combination of the GRADE algorithm with the CERAF strategy on a set of twenty functions (ANDRE) is presented in my Ph.D. thesis [2]
• An application of the combination with a Radial Basis Function Network (GRBFN) to the identification of damage model parameters is presented in [3]

Matlab implementation

• this implementation was developed for Matlab 7.14.0 and uses the randsample function from the Statistics Toolbox
• this implementation is intended for the minimization of functions
• the Matlab implementation of the RBFN part of the algorithm consists of the following files:
• GRBFNoptimization.m(787B, version 0.0.2, released 17 Sep 2013) - main function of the algorithm
RBFNinit.m(1.96kB, version 0.1.2, released 30 Mar 2009) - RBFN initialization script
RBFNcreate.m(1.21kB, version 0.2.0, released 17 Sep 2013) - script creating the net and training this net
RBFNupdate.m(3.51kB, version 0.1.2, released 30 Mar 2009) - script adding new points
regpoly0.m(377B, released 12 Apr 2002) - zero order polynomial regression function taken from toolbox DACE
regpoly1.m(385B, released 12 Apr 2002) - first order polynomial regression function taken from toolbox DACE
regpoly_off.m(244B, version 0.0.1, released 17 Sep 2013) - script switching off the regression part of RBFN
• the Matlab implementation of the GRADE part of the algorithm consists of the following 9 files:
• GRADEplusCERAFoptimization.m(585B, version 0.0.3, released 17 Sep 2013) - main function of the GRADE part
GRADEinit.m(2.40kB, version 0.0.4, released 30 Mar 2009) - initialization script of the GRADE part algorithm
CERAFinit.m(1.46kB, version 0.0.5, released 24 Mar 2009) - initialization script of the CERAF
CERAFcheck.m(2.25kB, version 0.0.5, released 24 Mar 2009) - script checking for new local extreme
GRADEnewpop.m(1kB, version 0.0.2, released 6 Feb 2009) - script generating new population using mutation and cross operators
GRADEselect.m(1.20kB, version 0.0.4, released 30 Mar 2009) - script evaluating new solutions and selecting the new population
Aconfig.m(437B, version 0.0.2, released 25 Mar 2009) - script initializing the approximation of objective function
Aeval.m(438B, version 0.0.1, released 30 Mar 2009) - script defining what should happen when a new better solution is found, e.g. print or save the solution
Avalue.m(687B, version 0.1.0, released 17 Sep 2013) - script defining the approximation value
• to use the Matlab version of the GRBFN algorithm, you must define 3 Matlab files specifying your function to be minimized, for example like the following files defining the Branin function and the F1 function:
• Branin.m(110B, version 0.0.2, released 24 Mar 2009) - definition of cost function
Braninconfig.m(1.19kB, version 0.0.4, released 17 Sep 2013) - configuration script specifying some cost function properties
Branineval.m(399B, version 0.0.3, released 30 Mar 2009) - script defining what should happen when a new better solution is found, e.g. print or save the solution
F1.m(80B, version 0.0.1, released 13 Feb 2009) - definition of cost function
F1config.m(1.07kB, version 0.0.4, released 17 Sep 2013) - configuration script specifying some cost function properties
F1eval.m(450B, version 0.0.3, released 30 Mar 2009) - script defining what should happen when a new better solution is found, e.g. print or save the solution
• an example of using optimization by GRBFN algorithm in Matlab is shown in file example.m(198B, version 0.0.1, released 30 Mar 2009)
• to download all the above mentioned files in one archive: grbfn-matlab.zip (11.9kB, released 17 Sep 2013)

References

[1] A. Kučerová, M. Lepš and J. Skoček: Black-Box Function Optimization using Radial Basis Function Networks, Proceedings of the Eighth International Conference on the Application of Artificial Intelligence to Civil, Structural and Environmental Engineering, 2005. PDF (375kB)
[2] A. Kučerová: Identification of nonlinear mechanical model parameters based on softcomputing methods, Ph.D. thesis, Ecole Normale Supérieure de Cachan, Laboratoire de Mécanique et Technologie, 2007. PDF (5.03MB), presentation (4.55MB), BibTeX entry
[3] A. Kučerová, D. Brancherie, A. Ibrahimbegovic, J. Zeman and Z. Bittnar: Novel anisotropic continuum-discrete damage model capable of representing localized failure of massive structures. Part II: identification from tests under heterogeneous stress field. Engineering Computations, 2009, accepted for publication, e-print: arXiv:0902.1665