Analog circuit elements (e.g., resistors, capacitors, transistors) are
prone to process, voltage, and temperature (PVT) variations. "Process"
here refers to the statistical variability in the physical
characteristics of components as they come off the production line,
arising from manufacturing factors ranging from mask alignment and
etching times to doping levels. These variations affect
the electrical parameters of the components, such as the sheet and
contact resistance. Analog components also change in time to their
endurance limit (the stress level below which an infinite number of
loading cycles can be applied to a material without causing fatigue
failure). Here, we used Monte Carlo-driven variations to study: 1. The
way our hardware design handles a high degree of component variation;
and 2. To compare the traditional variation-based spanning of a
representation space with the programmed neurons’ tuning approach. In
each simulation run, all components in our circuit design were varied
within an explicitly defined variation rate (e.g., in the 5% variation
case study, the 10 nF capacitors featured in our OZ circuit design were
randomly assigned values in the 9.5–10.5 nF range). Transistors were
similarly varied in their sizes. The level of process variation
increases as the process size decreases. For example, a fabrication
process that decreases from 350 nm to 90 nm will reduce chip yield from
nearly 90% to a mere 50%, and at 45 nm, the yield drops to
approximately 30%. Here, we simulated 100 Monte Carlo runs with 3, 5,
and 7% variability. The resulting neuron tuning, with a bounded
distribution of intercepts and firing rates and with a single setpoint
(used for the variation-based spanning of the representation space), is
shown in Figure 6, A. The results show that the
intercepts are much more prone to variation than the neurons’ firing
rate. Importantly, we show that relying on process variation for the
manifestation of neurons with heterogeneous tuning curves is inadequate
compared to a predefined distribution of neuron tuning (Figure
6, B). These results further demonstrate that our learning circuit
design can compensate for process variation.
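The per-run component sampling described above can be sketched as
follows. This is a minimal illustration, not the actual simulation
tooling: uniform sampling within the bounded range is an assumption
(the text only states that values are randomly specified within the
range), and all function names are hypothetical.

```python
import random

def sample_component(nominal, variation_rate, rng):
    """Draw one component value uniformly within +/- variation_rate of nominal."""
    return rng.uniform(nominal * (1 - variation_rate),
                       nominal * (1 + variation_rate))

def monte_carlo_runs(nominals, variation_rate, n_runs=100, seed=0):
    """Generate n_runs randomized component sets (one dict per run)."""
    rng = random.Random(seed)
    return [
        {name: sample_component(value, variation_rate, rng)
         for name, value in nominals.items()}
        for _ in range(n_runs)
    ]

# Example: two 10 nF capacitors under the 5% case study.
# Every sampled value falls in the 9.5-10.5 nF range.
runs = monte_carlo_runs({"C1": 10e-9, "C2": 10e-9}, variation_rate=0.05)
assert len(runs) == 100
assert all(9.5e-9 <= run["C1"] <= 10.5e-9 for run in runs)
```

The same sampling would be applied to transistor sizes, with the
variation rate set to 0.03, 0.05, or 0.07 for the three case studies.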
Circuit Emulator
To efficiently demonstrate our circuit design on a large scale, we
designed a neural emulator. Our emulator is a scalable Python-based
framework designed to support compiling, testing, and deploying OZ-based
SNNs, supporting PES-based learning as described above. The emulator is
time-based with a predefined simulation time and number of steps. At
each step, the emulator’s scheduler traverses a list of SimBase objects,
activating them. The arrangement of SimBase objects constitutes the
network design, which is left for the user to define. Each SimBase
object is aware
of the simulation time step via a configuration class. Its
responsibility is to process the input data received via a voltage or a
current source interface object. Following each activation step, each
object stores its resulting state. Each building block (learning block,
error block, etc.) has a corresponding model created using its SPICE
simulation with varying input signals. Blocks can be built
hierarchically. For example, the OZ neuron block comprises the pulse
current synapse block, which comprises a current source. The emulator is
schematically shown in Figure 7, A and is available in
<will be provided upon acceptance>, where
further implementation details are given.
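The scheduler loop and the hierarchical block structure described above
might look like the following sketch. SimBase, the configuration class,
and the scheduler traversal come from the description above; the method
names (`step`, `activate`, `run`) and the `CurrentSource`/`PulseSynapse`
details are illustrative assumptions rather than the emulator's actual
API.

```python
class Config:
    """Shared simulation configuration: time step and number of steps."""
    def __init__(self, dt, n_steps):
        self.dt = dt
        self.n_steps = n_steps

class SimBase:
    """Base class for emulated blocks; subclasses override step().

    Following each activation, the object stores its resulting state."""
    def __init__(self, config):
        self.config = config  # makes the object aware of the time step
        self.history = []

    def step(self, t):
        raise NotImplementedError

    def activate(self, t):
        self.history.append(self.step(t))

class CurrentSource(SimBase):
    """Leaf block: a constant current source (an input interface object)."""
    def __init__(self, config, amps):
        super().__init__(config)
        self.amps = amps

    def step(self, t):
        return self.amps

class PulseSynapse(SimBase):
    """Hierarchical block: comprises a current source, gating it into pulses."""
    def __init__(self, config, source, period):
        super().__init__(config)
        self.source = source
        self.period = period

    def step(self, t):
        # emit current only during the first half of each period
        i = self.source.step(t)
        return i if (t % self.period) < self.period / 2 else 0.0

def run(objects, config):
    """Scheduler: traverse the list of SimBase objects once per step."""
    for k in range(config.n_steps):
        t = k * config.dt
        for obj in objects:
            obj.activate(t)

cfg = Config(dt=1e-3, n_steps=100)
syn = PulseSynapse(cfg, CurrentSource(cfg, amps=1e-6), period=20e-3)
run([syn], cfg)
assert len(syn.history) == 100  # one stored state per simulation step
```

In the same spirit, an OZ neuron block would wrap the pulse-current
synapse block, mirroring the hierarchy described above.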