We applied our proposed learning block (Figure 2, B-C) to realize
neuromorphic representation with PES learning, using the input signal
itself as the reference signal (Figure 3, A). We used the system
to encode and decode exponential and sinusoidal signals with two, four,
and eight OZ neurons (Figure 3, B-D). As expected, following Equation 4,
the learning system’s performance improves as the number of neurons
increases. Our hardware simulation results
(Figure 3, D, red traces) closely follow Nengo’s NEF-based
software simulation (Figure 3, D, purple traces), with a
cross-correlation similarity (sliding dot product) of 0.872±0.032. We
show that an analog learning system comprising only eight OZ neurons can
accurately represent the input, converging swiftly to the
represented value.
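As a point of reference, the software side of this comparison can be reproduced with a minimal Nengo model in which an ensemble learns to represent its own input: the decoded output minus the input is fed back as the PES error signal. The signal, neuron count, and learning rate below are illustrative assumptions rather than the exact parameters of our simulations.

```python
import numpy as np
import nengo

with nengo.Network() as model:
    # Sinusoidal input; the input itself serves as the reference signal.
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))

    ens = nengo.Ensemble(n_neurons=8, dimensions=1)
    out = nengo.Node(size_in=1)
    nengo.Connection(stim, ens)

    # Start from a zero decoding; PES adapts the decoders online.
    conn = nengo.Connection(ens, out, function=lambda x: 0.0,
                            learning_rule_type=nengo.PES(learning_rate=1e-4))

    # Error = decoded output - reference input; PES drives it toward zero.
    error = nengo.Node(size_in=1)
    nengo.Connection(out, error)
    nengo.Connection(stim, error, transform=-1)
    nengo.Connection(error, conn.learning_rule)

    probe_out = nengo.Probe(out, synapse=0.01)
    probe_in = nengo.Probe(stim, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(10.0)
```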
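The similarity score quoted above is the peak of a normalized sliding dot product between two traces; the following is a minimal sketch of such a metric (our exact normalization and windowing are assumptions):

```python
import numpy as np

def xcorr_similarity(x, y):
    """Peak of the normalized sliding dot product between two traces;
    1.0 indicates identical signals up to a time shift."""
    x = x - x.mean()
    y = y - y.mean()
    x /= np.linalg.norm(x) + 1e-12
    y /= np.linalg.norm(y) + 1e-12
    return float(np.max(np.correlate(x, y, mode="full")))
```

Applied to the flattened hardware and software output traces (e.g., `sim.data[probe_out].ravel()` for the latter), this yields a score of the kind reported above.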
As described above, representation depends strongly on neuron tuning.
The results shown in Figure 3, B-D were derived using neurons
with a bounded activation distribution. We further represented the
sinusoidal input with neurons characterized by uniform and pure
activations, following Figure 1, C; the results are shown in
Figure 3, E. We evaluated the representation under the three
activation schemes with one to eight neurons by calculating the
root mean square (RMS) of the error. Our results demonstrate superior
performance for a bounded distribution of neuron tuning (Figure 3, F). The
continuously changing weights of each neuron are shown in Figure
3, G, demonstrating continual online learning.
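Below is a sketch of the evaluation metric behind Figure 3, F, together with illustrative tuning configurations. How the bounded, uniform, and pure OZ activation schemes of Figure 1, C map onto tuning parameters is hardware-specific; the intercept choices here are hypothetical stand-ins for illustration only.

```python
import numpy as np
import nengo

def rms(decoded, reference):
    """Root mean square of the representation error (the Figure 3, F metric)."""
    return float(np.sqrt(np.mean((decoded - reference) ** 2)))

# Hypothetical stand-ins for the three activation schemes (assumptions):
# in Nengo, tuning-curve diversity can be controlled via the intercepts.
n = 8
bounded = nengo.dists.Uniform(-0.9, 0.9)   # intercepts drawn from a bounded range
uniform = np.linspace(-0.95, 0.95, n)      # evenly spaced intercepts
pure = np.zeros(n)                         # identical tuning for every neuron

with nengo.Network() as model:
    ens = nengo.Ensemble(n_neurons=n, dimensions=1, intercepts=bounded)
```

Sweeping n from one to eight for each scheme and recomputing the RMS of the decoded output against the input reproduces the comparison underlying Figure 3, F.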