Effective Statistical Learning Methods for Actuaries III by Michel Denuit & Donatien Hainaut & Julien Trufin

Author: Michel Denuit & Donatien Hainaut & Julien Trufin
Language: eng
Format: epub
ISBN: 9783030258276
Publisher: Springer International Publishing


Algorithm 5.1 Kohonen’s algorithm for quantitative variables

Remark that the neural map admits a double representation. One is on the pavement [0, 1] × [0, 1], where the positions of the neurons are fixed. The other one is in ℝ^p, where the positions of the nodes are determined by the p-vectors ω_u for u = 1, …, l². Kohonen (1982) proposed a procedure to construct this map, which is recalled in Algorithm 5.1.
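
To fix ideas, the following minimal sketch (not taken from the book: the numpy implementation, the array names and the 3 × 3 grid are assumptions made for illustration) stores the two representations side by side, namely the fixed neuron positions on the pavement and the codebooks ω_u in ℝ^p.

import numpy as np

l = 3   # side length of the neuron grid, hence l**2 neurons in total
p = 2   # dimension of the feature vectors (e.g. two rescaled ages, as in the illustration below)

# Representation 1: fixed positions of the l**2 neurons on the [0, 1] x [0, 1] pavement.
grid_positions = np.array([(i / (l - 1), j / (l - 1))
                           for i in range(l) for j in range(l)])

# Representation 2: the codebooks omega_u in R^p, one p-vector per neuron,
# here initialised at random in [0, 1]^p.
codebooks = np.random.rand(l * l, p)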

The algorithm scans the portfolio and finds, for each policy, the neuron with the codebook closest to its features. This neuron is called the best matching node (BMN). After this step, the weights of the neurons in the neighbourhood of the BMN are updated in the direction of the policy features. The size of the update decreases with the epoch of the algorithm and with the distance between the BMN and the updated neurons. Notice that the functions ε(e) and σ(e) in Eqs. (5.3) and (5.5) may be replaced by any other decreasing functions of the epoch e. The total distance d_total is the classification error incurred if the codebook ω_BMN(i) is used for the ith policy instead of its true features x_i. This distance is monitored to check the convergence of the algorithm: when it no longer varies, the training of the neural net is finished.
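
Building on the arrays defined above, one pass of this procedure over the portfolio could be sketched as follows. This is an illustrative rendering, not the book's Algorithm 5.1 verbatim: the Gaussian neighbourhood kernel and the name kohonen_epoch are assumptions, since the exact forms of Eqs. (5.3)–(5.5) are not reproduced in this excerpt.

import numpy as np

def kohonen_epoch(X, codebooks, grid_positions, eps, sigma):
    """One pass over the portfolio X (one row of rescaled features per policy).

    eps and sigma are the learning rate and the neighbourhood radius used for
    the current epoch; both are meant to decrease from one epoch to the next.
    """
    d_total = 0.0
    for x in X:
        # Best matching node (BMN): the neuron whose codebook is closest to x.
        dist = np.linalg.norm(codebooks - x, axis=1)
        bmn = int(np.argmin(dist))
        d_total += dist[bmn]
        # Neighbourhood weights on the grid, decreasing with the distance to the BMN.
        grid_dist = np.linalg.norm(grid_positions - grid_positions[bmn], axis=1)
        h = np.exp(-grid_dist ** 2 / (2.0 * sigma ** 2))
        # Move the codebooks of the neighbouring neurons towards the policy features x.
        codebooks += eps * h[:, None] * (x - codebooks)
    # d_total is the quantity monitored across epochs to assess convergence.
    return d_total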

The speed of convergence of Kohonen's algorithm depends on the initial weights ω_u(0). They should be chosen so as to reflect as much as possible the range of the policy features. The convergence also depends on the parameters ε_0, θ_0 and σ_0 of Eqs. (5.3)–(5.5). If they are too high, the weights of the neurons oscillate during the first iterations. If ε_0, θ_0 and σ_0 are too small, the modifications of the codebooks are not significant enough. In both cases, the number of epochs must be increased to achieve convergence.

To illustrate this section, we apply the Kohonen algorithm to data from the Swedish insurance company Wasa, presented in Sect. 1.11. We build a map of the portfolio that groups policyholders according to the owner's age and the vehicle age, rescaled on the interval [0, 1] (ages are divided by their maximum values). The number of epochs is e_max = 100 and the grid of neurons counts 9 elements (l² = 9). The initial codebooks are chosen so as to regularly cover the [0, 1] × [0, 1] pavement. The parameters of Eqs. (5.3)–(5.5) for the update of the codebooks are ε_0 = 0.01, θ_0 = 1 and σ_0 = 0.10. These values have been chosen by trial and error to ensure quick convergence.
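
Under the same assumptions as in the sketches above, and with simulated ages standing in for the Wasa data (the actual data set is not reproduced here), such a run could be set up along the following lines; the linear decay of the learning rate and of the neighbourhood radius is also an assumption.

rng = np.random.default_rng(0)
owner_age = rng.uniform(18, 92, size=1000)    # simulated stand-in for the owners' ages
vehicle_age = rng.uniform(0, 30, size=1000)   # simulated stand-in for the vehicle ages
X = np.column_stack([owner_age / owner_age.max(),
                     vehicle_age / vehicle_age.max()])   # rescaling on [0, 1]

eps0, sigma0, e_max = 0.01, 0.10, 100   # theta_0 of Eq. (5.4) plays no role in this sketch
codebooks = grid_positions.copy()       # initial codebooks regularly covering the pavement

for e in range(1, e_max + 1):
    decay = 1.0 - (e - 1) / e_max       # simple decreasing schedule, standing in for Eqs. (5.3) and (5.5)
    d_total = kohonen_epoch(X, codebooks, grid_positions,
                            eps=eps0 * decay, sigma=max(sigma0 * decay, 1e-3))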

Figure 5.2 shows the Kohonen map in the space of the variables, in which each policy is identified by a dot. Each colored area represents a group of policies centered around a neuron (black dots).

Fig. 5.2 Kohonen's map of the portfolio, with 9 neurons. The segmenting variables are the owner's age and the vehicle age. Each policy is represented by a dot. Black dots indicate the positions of the neurons. The colors identify the areas of influence of the neurons


