An Introduction to Neural Networks


6.4. ADAPTIVE RESONANCE THEORY


[Figure: the ART1 architecture, with an input layer F1 of N neurons (index i) and a category layer F2 of M neurons (index j), connected through forward LTM Wf and backward LTM Wb; gain units G1 and G2 feed the two layers, a reset signal acts on F2, and the input enters at F1.]

Figure 6.13: The ART1 neural network.

The pattern is sent to F2, and in F2 one neuron becomes active. This signal is then sent back over the backward LTM, which reproduces a binary pattern at F1. Gain 1 is inhibited, and only the neurons in F1 which receive a `one' from both x and F2 remain active. If there is a substantial mismatch between the two patterns, the reset signal will inhibit the neuron in F2 and the process is repeated.

Instead of following Carpenter and Grossberg's description of the system using differential equations, we use the notation employed by Lippmann (Lippmann, 1987); a code sketch of the complete procedure follows the listing:

1. Initialisation:

$w^b_{ji}(0) = 1$

$w^f_{ij}(0) = \dfrac{1}{1+N}$

where $N$ is the number of neurons in F1, $M$ the number of neurons in F2, $0 \le i < N$, and $0 \le j < M$. Also, choose the vigilance threshold $\rho$, $0 \le \rho \le 1$

2. Apply the new input pattern x

3. compute the activation values $y'$ of the neurons in F2:

$y'_j = \sum_{i=1}^{N} w^f_{ij}(t)\, x_i \qquad (6.30)$

4. select the winning neuron $k$ ($0 \le k < M$)

5. vigilance test: if

$\dfrac{\mathbf{w}^b_k(t) \cdot \mathbf{x}}{\mathbf{x} \cdot \mathbf{x}} > \rho \qquad (6.31)$

where $\cdot$ denotes inner product, go to step 7, else go to step 6. Note that $\mathbf{w}^b_k \cdot \mathbf{x}$ essentially is the inner product $\mathbf{x}^* \cdot \mathbf{x}$ (with $\mathbf{x}^*$ the pattern stored at neuron $k$), which will be large if $\mathbf{x}^*$ and $\mathbf{x}$ are near to each other
6. neuron k is disabled from further activity. Go to step 3

7. set for all $l$, $0 \le l < N$:

$w^b_{kl}(t+1) = w^b_{kl}(t)\, x_l$

$w^f_{lk}(t+1) = \dfrac{w^b_{kl}(t)\, x_l}{\frac{1}{2} + \sum_{i=1}^{N} w^b_{ki}(t)\, x_i}$


8. re-enable all neurons in F2 and go to step 2.
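The listing above maps almost line for line onto code. Below is a minimal NumPy sketch, assuming binary (0/1) input vectors that are not all zero; the names art1, patterns, n_f2 and vigilance are illustrative choices, not from the text:

```python
# A minimal sketch of steps 1-8, assuming binary (0/1) inputs;
# not the authors' implementation.
import numpy as np

def art1(patterns, n_f2, vigilance=0.7):
    """Cluster binary patterns; returns the winning F2 unit per pattern."""
    n_f1 = len(patterns[0])                        # N, the number of F1 neurons
    # step 1: initialise backward and forward LTM
    w_b = np.ones((n_f2, n_f1))                    # w^b_ji(0) = 1
    w_f = np.full((n_f1, n_f2), 1.0 / (1 + n_f1))  # w^f_ij(0) = 1/(1+N)
    winners = []
    for x in patterns:                             # step 2: apply input pattern x
        x = np.asarray(x, dtype=float)
        enabled = np.ones(n_f2, dtype=bool)
        while True:
            y = w_f.T @ x                          # step 3: F2 activations, eq. (6.30)
            y[~enabled] = -np.inf
            k = int(np.argmax(y))                  # step 4: winning neuron k
            if (w_b[k] @ x) / (x @ x) > vigilance:
                break                              # step 5: vigilance test (6.31) passed
            enabled[k] = False                     # step 6: disable k, back to step 3
            if not enabled.any():                  # all units rejected: accept the last
                break                              # winner (a case the text leaves open)
        # step 7: update the winner's LTM, for all l
        new_b = w_b[k] * x                         # w^b_kl(t+1) = w^b_kl(t) x_l
        w_f[:, k] = new_b / (0.5 + new_b.sum())    # sum(new_b) = sum_i w^b_ki(t) x_i
        w_b[k] = new_b
        winners.append(k)                          # step 8: F2 re-enabled on next pass
    return winners, w_b
```

For instance, applying art1 to the patterns [1,1,0,0], [1,1,1,0] and [0,0,1,1] with vigilance 0.5 lets the first two patterns resonate with the same F2 unit while the third recruits a fresh one, the kind of clustering shown in Figure 6.14.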

Figure 6.14 shows exemplar behaviour of the network.

[Figure: four binary letter patterns in an `input pattern' column; for each, the backward LTM from output units 1 to 4, each unit marked `active' or `not active'.]

Figure 6.14: An example of the behaviour of the Carpenter Grossberg network for letter patterns. The binary input patterns on the left were applied sequentially. On the right the stored patterns (i.e., the weights of Wb for the first four output units) are shown.

6.4.3 ART1: The original model

In later work, Carpenter and Grossberg (Carpenter & Grossberg, 1987a, 1987b) present several neural network models to incorporate parts of the complete theory. We will only discuss the first model, ART1.

The network incorporates a follow-the-leader clustering algorithm (Hartigan, 1975). This algorithm tries to fit each new input pattern into an existing class. If no matching class can be found, i.e., the distance between the new pattern and all existing classes exceeds some threshold, a new class is created containing the new pattern.
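A minimal sketch of that scheme, assuming Euclidean distance to running class means and a user-chosen threshold (both choices made here for illustration, not Hartigan's exact formulation):

```python
# A sketch of follow-the-leader clustering (Hartigan, 1975), assuming
# Euclidean distance to class means; names and distance are illustrative.
import numpy as np

def follow_the_leader(patterns, threshold):
    classes = []                                   # running mean of each class
    members = []                                   # patterns assigned to each class
    for x in patterns:
        x = np.asarray(x, dtype=float)
        dists = [np.linalg.norm(x - c) for c in classes]
        if dists and min(dists) <= threshold:
            j = int(np.argmin(dists))              # fits an existing class: join it
            members[j].append(x)
            classes[j] = np.mean(members[j], axis=0)
        else:
            classes.append(x)                      # no class is close enough:
            members.append([x])                    # start a new class around x
    return classes
```

The distance threshold plays the role that the vigilance $\rho$ plays in ART1, with a large threshold corresponding to a low vigilance: both control how easily a new pattern is absorbed into an existing class rather than founding a new one.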


