Methods

The Neural Engineering Framework (NEF) is based on three fundamental principles: representation, transformation, and dynamics. Here, we show that our PES-driven analog hardware design can be used to implement these principles.

Neuromorphic Representation with NEF

NEF provides a theoretical framework for the neuromorphic encoding of mathematical constructs with spiking neurons, allowing for the implementation of functional large-scale neural networks. With it, information given in terms of vectors and functions can be transformed into a set of interconnected ensembles of spiking neurons. In NEF, the spike train \(\delta_{i}\) of neuron \(i\) in response to a stimulus \(x\) is defined as follows:
\(\delta_{i}\left(x\right)=G_{i}\left[\alpha_{i}e_{i}\cdot x+J_{i}^{b}\right],\ \ \)(1)
where \(G_{i}\) is a spiking neuron model, \(\alpha_{i}\) is a gain term, \(e_{i}\) is the neuron's preferred stimulus (encoding vector), and \(J_{i}^{b}\) is a fixed background current. An ensemble of neurons can encode a high-dimensional vector, which can be linearly decoded as \(\hat{x}\) according to:
\(\hat{x}=\sum_{i=1}^{N}{a_{i}(x)d_{i}}\), (2)
where \(N\) is the number of neurons, \(a_{i}(x)\) is the postsynaptic low-pass-filtered response of neuron \(i\) to the stimulus \(x\), and \(d_{i}\) is a linear decoder optimized to reconstruct \(x\) using least-squares optimization. A neuron's postsynaptic response is defined in NEF as:
\(a_{i}\left(x\right)=\sum_{j}{h_{i}*\delta_{i}(t-t_{j}\left(x\right))}\), (3)
where \(h_{i}\) is the synaptic response function (usually an exponential with a time constant determined by the neurotransmitter type at the synapse), \(*\) is the convolution operator, and \(\delta_{i}(t-t_{j}\left(x\right))\) is the spike train produced by neuron \(i\) in response to stimulus \(x\), with spike times indexed by \(j\).
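To make these definitions concrete, the following is a minimal NumPy sketch of Equations (1) and (2). It substitutes a rate (rectified-linear) response for the spiking model \(G_{i}\), and all parameter ranges (gains, background currents, stimulus sampling) are illustrative assumptions rather than values from our design:

```python
# Minimal NumPy sketch of NEF encoding/decoding (Eqs. 1 and 2), using a
# rate (rectified-linear) approximation of G_i instead of a spiking model.
# Parameter ranges are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 100                                   # number of neurons in the ensemble
x = np.linspace(-1, 1, 200)               # samples of a 1D stimulus

e = rng.choice([-1.0, 1.0], size=N)       # encoders e_i (+/-1 in 1D)
alpha = rng.uniform(0.5, 2.0, size=N)     # gains alpha_i
J_b = rng.uniform(-1.0, 1.0, size=N)      # background currents J_i^b

# Eq. (1), rate version: a_i(x) = G[alpha_i * e_i * x + J_i^b]
J = alpha[:, None] * e[:, None] * x[None, :] + J_b[:, None]
A = np.maximum(J, 0.0)                    # rectified-linear G

# Regularized least-squares decoders d_i, then Eq. (2): x_hat = sum a_i d_i
reg = 0.1 * A.max()
d = np.linalg.solve(A @ A.T + reg**2 * N * np.eye(N), A @ x)
x_hat = d @ A

print("RMSE:", np.sqrt(np.mean((x_hat - x) ** 2)))
```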
Importantly, when the representation is distributively realized with spiking neurons, the number of neurons dramatically affects performance and stability. The resulting decoder-induced static noise \(S_{N}\) decreases with the number of neurons \(N\) according to:

\(S_{N}\sim\frac{1}{N^{2}}.\) (4)
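This scaling can be checked empirically with the same rate-based sketch by rebuilding the ensemble for increasing \(N\) and measuring the reconstruction error. The exact exponent depends on which error term dominates, so this is only an illustrative check of the decreasing trend:

```python
# Illustrative check of Eq. (4): reconstruction error vs. ensemble size N,
# using the same rate-based encoder/decoder as above. Parameter ranges are
# assumptions; only the decreasing trend is the point here.
import numpy as np

def decode_rmse(N, seed=1):
    rng = np.random.default_rng(seed)
    x = np.linspace(-1, 1, 200)
    e = rng.choice([-1.0, 1.0], size=N)
    alpha = rng.uniform(0.5, 2.0, size=N)
    J_b = rng.uniform(-1.0, 1.0, size=N)
    A = np.maximum(alpha[:, None] * e[:, None] * x[None, :] + J_b[:, None], 0)
    reg = 0.1 * A.max()
    d = np.linalg.solve(A @ A.T + reg**2 * N * np.eye(N), A @ x)
    return np.sqrt(np.mean((d @ A - x) ** 2))

for N in (10, 50, 250):
    print(N, decode_rmse(N))
```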

Neuromorphic Transformation with NEF

Equations (1) and (2) describe how vectors are encoded and decoded using neural spiking activity in neuronal ensembles. Propagation of data from one ensemble to another is realized through weighted synaptic connections, formulated with a weight matrix. The resulting activity transformation is a function of \(x\). Notably, it was shown that any function \(f(x)\) can be approximated using some set of decoding weights \(d^{f}\). A function \(f(x)\) can be defined in NEF by connecting two neuronal ensembles A and B via neural connection weights \(w_{ij}\) using:
\(w_{ij}=d_{i}\otimes e_{j}\), (5)
where \(i\) is the neuron index in ensemble \(A\), \(j\) is the neuron index in ensemble \(B\), \(d_{i}\) are the decoders of ensemble A, optimized to transform \(x\) into \(f(x)\), \(e_{j}\) are the encoders of ensemble B, which represents \(f(x)\), and \(\otimes\) is the outer product operation.
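As an example of Equation (5), the following sketch (assuming the open-source NEF implementation Nengo, with illustrative ensemble sizes) connects two ensembles so that B represents \(f(x)=x^{2}\). Note that Nengo keeps the factored form, decoders \(d_{i}\) and encoders \(e_{j}\), rather than materializing the full weight matrix \(w_{ij}\):

```python
# Hedged Nengo sketch of Eq. (5): ensemble A is connected to ensemble B so
# that B represents f(x) = x^2. The decoders of A are solved by least squares
# for f; the full weights w_ij = d_i (x) e_j are never formed explicitly.
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    ens_a = nengo.Ensemble(n_neurons=100, dimensions=1)
    ens_b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, ens_a)
    nengo.Connection(ens_a, ens_b, function=lambda x: x ** 2)
    probe = nengo.Probe(ens_b, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)
```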

Prescribed Error Sensitivity

Connection weights, which govern the transformation from one representation to another, can also be adapted or learned in real time rather than optimized during model building. Real-time weight adaptation is of particular interest in AI, where unknown environmental perturbations can affect the error. One efficient way to implement real-time learning with NEF is the prescribed error sensitivity (PES) learning rule. PES is a biologically plausible supervised learning rule that modifies a connection's decoders \(d\) to minimize an error signal \(e\), calculated as the difference between the stimulus and its approximated representation: \(e=\hat{x}-x\). PES applies the update rule:
\(\Delta d=-\lambda e\delta\), (6)
where \(\lambda\) is the learning rate. Notably, it was shown that when \(1-\lambda\left\|\delta\right\|^{2}\) (denoted \(\gamma\)) is larger than \(-1\), the error \(e\) decays to 0 exponentially at rate \(\gamma\). PES is described at length in prior work.
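A minimal PES sketch in the same vein (again assuming Nengo; the learning rate and ensemble sizes are illustrative) sets up a learned connection whose decoders start at zero and are adapted online by the error \(e=\hat{x}-x\):

```python
# Hedged Nengo sketch of PES (Eq. 6): the connection pre -> post starts by
# decoding the zero function, and its decoders are adapted online to drive
# the error e = x_hat - x toward zero. learning_rate is illustrative.
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    pre = nengo.Ensemble(n_neurons=100, dimensions=1)
    post = nengo.Ensemble(n_neurons=100, dimensions=1)
    error = nengo.Ensemble(n_neurons=100, dimensions=1)

    nengo.Connection(stim, pre)
    conn = nengo.Connection(pre, post, function=lambda x: [0],
                            learning_rule_type=nengo.PES(learning_rate=1e-4))

    nengo.Connection(post, error)                 # +x_hat
    nengo.Connection(stim, error, transform=-1)   # -x  =>  e = x_hat - x
    nengo.Connection(error, conn.learning_rule)   # error drives the update

with nengo.Simulator(model) as sim:
    sim.run(5.0)
```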

Neuromorphic Dynamics with NEF

We recently described neuromorphic dynamics with NEF. System dynamics is a theoretical framework concerning the nonlinear behavior of complex systems over time. Dynamics is the third fundamental principle of NEF, and it provides the framework for using SNNs to solve differential equations. It is essentially an integration of the first two NEF principles, representation and transformation, where transformation is used in a recurrent scheme. A recurrent connection (connecting a neural ensemble back to itself) is defined using \(x\left(t\right)=f\left(x\left(t\right)\right)*h(t)\). A canonical linear error-correcting feedback loop can be described using \(\frac{dx}{dt}=Ax\left(t\right)+Bu(t)\), where \(x\left(t\right)\) is a state vector, which summarizes the effect of all past inputs, \(u\left(t\right)\) is the input vector, \(B\) is the input matrix, and \(A\) is the dynamics matrix. In NEF, this standard control scheme can be realized using:
\(x\left(t\right)=h(t)*\left[A^{\prime}x\left(t\right)+B^{\prime}u\left(t\right)\right],\)(7)
where \(A^{\prime}\) is the recurrent connection matrix, defined as \(\tau A+I\), where \(I\) is the identity matrix and \(\tau\) is the synaptic decay time constant, and \(B^{\prime}\) is the input connection matrix, defined as \(\tau B\).
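For the simplest instance of Equation (7), an integrator (\(A=0\), \(B=1\), hence \(A^{\prime}=1\) and \(B^{\prime}=\tau\)), a hedged Nengo sketch (time constant and ensemble size are illustrative) is:

```python
# Hedged Nengo sketch of Principle 3 (Eq. 7) for an integrator: dx/dt = u,
# i.e., A = 0 and B = 1, so A' = tau*A + I = 1 and B' = tau*B = tau.
import nengo

tau = 0.1  # decay time constant of the recurrent synaptic filter h(t)

with nengo.Network() as model:
    u = nengo.Node(lambda t: 1.0 if t < 0.2 else 0.0)  # brief input pulse
    x = nengo.Ensemble(n_neurons=200, dimensions=1)
    nengo.Connection(x, x, transform=1.0, synapse=tau)  # A' = 1
    nengo.Connection(u, x, transform=tau, synapse=tau)  # B' = tau
    probe = nengo.Probe(x, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)
```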
An oscillator is a fundamental dynamical system. A two-dimensional (2D) oscillator, which alternates the values of \(x_{1}\) and \(x_{2}\) at a rate \(r\), can be defined recurrently using:
\(\begin{pmatrix}x_{1}\\ x_{2}\end{pmatrix}=\begin{pmatrix}1&r\\ -r&1\end{pmatrix}\begin{pmatrix}x_{1}\\ x_{2}\end{pmatrix}=Ax.\) (8)
To achieve an oscillatory dynamic in which \(\frac{dx_{1}}{dt}=rx_{2}\) and \(\frac{dx_{2}}{dt}=-rx_{1}\), the following recurrent connections are defined: \(x_{1}=x_{1pre}+rx_{2}\) and \(x_{2}=x_{2pre}-rx_{1}\), achieving \(\dot{x}_{1}=\frac{r}{\tau}x_{2}\) and \(\dot{x}_{2}=\frac{-r}{\tau}x_{1}\). Implementing this model without inducing some initial value of \(x_{1}\) or \(x_{2}\) will result in a silent oscillator, i.e., it will stand still at \((0,0)\). However, when a stimulus is applied, even a very short one, the oscillator is driven to oscillate indefinitely. A leaky oscillator can be defined by introducing \(\kappa\) as a damping factor:
\(\begin{pmatrix}x_{1}\\ x_{2}\end{pmatrix}=\left(A-\kappa I\right)\begin{pmatrix}x_{1}\\ x_{2}\end{pmatrix}\), (9)
where \(I\) is the identity matrix.
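The oscillator of Equations (8) and (9) can be sketched the same way. Below, the dynamics matrix with off-diagonal entries \(\pm r\) (and optional damping \(-\kappa I\)) is mapped to the recurrent matrix \(A^{\prime}=\tau A+I\); \(r\), \(\tau\), \(\kappa\), and the brief kick used to start the oscillation are illustrative choices:

```python
# Hedged Nengo sketch of the 2D oscillator (Eqs. 8 and 9). The recurrent
# transform is A' = tau*A + I, with A = [[0, r], [-r, 0]] - kappa*I; setting
# kappa > 0 gives the leaky (damped) variant. All values are illustrative.
import numpy as np
import nengo

r, tau, kappa = 2.0, 0.1, 0.0
A = np.array([[0.0, r], [-r, 0.0]]) - kappa * np.eye(2)
A_prime = tau * A + np.eye(2)

with nengo.Network() as model:
    # A brief kick; without it the oscillator stands still at (0, 0)
    kick = nengo.Node(lambda t: [1.0, 0.0] if t < 0.1 else [0.0, 0.0])
    osc = nengo.Ensemble(n_neurons=300, dimensions=2)
    nengo.Connection(kick, osc)
    nengo.Connection(osc, osc, transform=A_prime, synapse=tau)
    probe = nengo.Probe(osc, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(2.0)
```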

OZ NEF-inspired spiking neuron