The electrical circuit constituting the learning block is shown in Figure 2C. The learning block circuit comprises a voltage divider (setting the learning rate, colored blue), two multipliers
(colored purple), and a weight update module (colored orange). Analog
multipliers were implemented by subtracting the outputs of two analog squaring circuits. One squaring circuit is driven by the sum of the two signals (\(x,\ y\)) and the other by their difference, following the quarter-square identity: \({(x+y)}^{2}-\left(x-y\right)^{2}=4xy\). A differential amplifier then scales the result to cancel the constant factor. The diode bridge operates over a wide frequency range, and its square-law region is at the core of the squaring circuit.
The left diode bridge handles \(x+y\) and the right bridge handles
\(x-y\) (\(y\) is negated with an inverting op-amp). The squaring circuit’s output current can be approximated with a Taylor series. As the differential output across the diode bridges is symmetric, each bridge’s output comprises only the even terms of the combined Taylor expansions. Odd terms are removed in the combination of the four diode currents, as they produce frequency components outside the multiplier’s passband.
Therefore, the resulting output of the circuit is proportional to the
square of its input.
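The quarter-square identity underlying the multiplier is easy to verify numerically. The following is a minimal sketch assuming ideal square-law elements (the function name is ours, for illustration):

```python
import numpy as np

def quarter_square_multiply(x, y):
    # Left bridge squares the sum; right bridge squares the difference
    # (y is negated by the inverting op-amp before the right bridge).
    sum_squared = (x + y) ** 2
    diff_squared = (x - y) ** 2
    # The differential amplifier removes the constant factor of four:
    # ((x + y)^2 - (x - y)^2) / 4 = x * y.
    return (sum_squared - diff_squared) / 4.0

# The identity holds exactly for ideal square-law elements.
x, y = 0.3, -0.7
assert np.isclose(quarter_square_multiply(x, y), x * y)
```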
The first multiplier multiplies the normalized error with the neuron’s temporally integrated spikes, constituting the weight update. Weights are implemented with a memory cell (transistor-capacitor), which maintains negative values at low overhead. Using a recurrently connected summing amplifier, the weight update circuit adds this update to the current weight value. The second multiplier multiplies the weight with the neuron’s temporally integrated spikes, producing the neuron’s output.
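The learning block’s signal flow can be summarized behaviorally. Below is a minimal sketch under idealized assumptions (ideal multipliers, a lossless memory cell); all names and the learning-rate value are ours, not taken from the circuit:

```python
def learning_block_step(weight, error, activity, learning_rate=1e-3):
    """One idealized update step of the learning block.

    weight   -- value stored on the transistor-capacitor memory cell
    error    -- normalized error, scaled here by an illustrative rate
    activity -- the neuron's temporally integrated spikes
    """
    # First multiplier (with the voltage divider): form the weight update.
    delta_w = learning_rate * error * activity
    # Recurrently connected summing amplifier: add the update to the
    # weight currently stored on the memory cell.
    weight = weight + delta_w
    # Second multiplier: weighted activity forms the neuron's output.
    output = weight * activity
    return weight, output

# Example: a single update step from a zero-initialized weight.
w, out = learning_block_step(weight=0.0, error=0.5, activity=1.2)
```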
Circuit Simulation
In this section, we show that our hardware PES-driven analog design can be used to implement NEF’s three fundamental principles: representation, transformation, and dynamics (described above). The results below were generated using SPICE, with the exceptions of Figures 7 and 8, for which the results were generated using our Python-based emulator (described below), and Figure 3D, where the purple traces were generated using Nengo.
Representation
In NEF-driven representation, input signals are distributively encoded by neurons as spikes (following each neuron’s tuning curve) and decoded by either calculating a set of decoders (Equation 2) or learning a set of weights (Equation 5) via PES learning (Equation 6). In both cases, neuromorphic representation entails a reference signal (supervised learning).
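To make the two decoding routes concrete, the following is a toy rate-based sketch (our construction, not the paper’s SPICE circuit or emulator): regularized least-squares decoders in the spirit of Equation 2, and an online PES-style update in the spirit of Equation 6, both supervised by the reference signal \(x\). The tuning-curve model, \(\kappa\), and the regularizer \(\sigma\) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rectified-linear tuning curves over a 1-D stimulus x in [-1, 1].
n_neurons, n_samples = 50, 200
x = np.linspace(-1.0, 1.0, n_samples)
encoders = rng.choice([-1.0, 1.0], n_neurons)
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
activities = np.maximum(0.0, gains * encoders * x[:, None] + biases)

# Route 1: calculated least-squares decoders, with a small regularizer
# sigma for numerical stability.
sigma = 0.1 * activities.max()
gamma = activities.T @ activities + sigma**2 * np.eye(n_neurons)
upsilon = activities.T @ x
decoders_ls = np.linalg.solve(gamma, upsilon)

# Route 2: PES-style online learning, adapting the decoders from the
# error against the reference signal x (supervised learning).
decoders = np.zeros(n_neurons)
kappa = 1e-4  # illustrative learning rate
for _ in range(100):                        # repeated passes over the stimulus
    for t in range(n_samples):
        x_hat = activities[t] @ decoders            # decoded estimate
        decoders += kappa * (x[t] - x_hat) * activities[t]

print("least-squares MSE:", np.mean((activities @ decoders_ls - x) ** 2))
print("PES-learned   MSE:", np.mean((activities @ decoders - x) ** 2))
```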