Spiking Neural Networks (SNN)

This group of algorithms trains artificial intelligence systems by tuning the states and parameters of artificial neurons and synapses, allowing the network to learn new behavior by settling into a new homeostasis.

SNN algorithms leverage the plasticity of neural network systems.

Plasticity is the ability of a neural network to quickly adjust its connections, and hence its predictions, in response to new information. It is essential for the adaptability and robustness of artificial intelligence systems.

We utilize the following algorithms within the SNN framework:

  • Spike-Timing-Dependent Plasticity (STDP): STDP represents the most conventional form of SNN training. In this approach, the learning process is driven by the relative timing of spikes between two interconnected neurons. As an unsupervised model, STDP is particularly well-suited for artificial intelligence systems where precision is not the primary focus.

  • Backpropagation-based direct training schemes: Backpropagation is one of the most widely applied learning algorithms in artificial intelligence systems. The algorithm propagates errors backward from the output nodes to the input nodes. These methods are considered among the most effective approaches for training networks because of their high accuracy.

  • Supervised temporal learning: In these models, the classification process depends on the firing times of the output neurons. Being supervised, they tend to be more precise than STDP. However, they are best suited to binary classification and are less effective for multi-class classification tasks.

  • ANN-to-SNN conversion strategies: A network is first trained in the ANN framework, and the trained network is then converted into an SNN. Though the conversion process incurs a slight loss of accuracy, this approach is extremely beneficial because ANN training schemes are very mature and yield high accuracy. Moreover, this approach is suitable for ultra-large networks since it is not as complex as direct training.
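To make the STDP rule above concrete, here is a minimal sketch of a pair-based update: a presynaptic spike that precedes a postsynaptic spike strengthens the synapse, and the reverse order weakens it. The function name, constants, and weight bounds are illustrative assumptions, not taken from the text.

```python
import math

def stdp_update(w, dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP update; dt = t_post - t_pre in ms.

    Pre-before-post (dt > 0) potentiates the synapse;
    post-before-pre (dt < 0) depresses it. The learning
    rates and time constant here are illustrative only.
    """
    if dt > 0:
        dw = a_plus * math.exp(-dt / tau)   # potentiation
    else:
        dw = -a_minus * math.exp(dt / tau)  # depression
    # keep the weight in a bounded range [0, 1]
    return min(1.0, max(0.0, w + dw))
```

Note that no labels appear anywhere in the rule: the update depends only on spike timing, which is what makes STDP unsupervised.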
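Backpropagation-based direct training has to cope with the fact that the spike function is a non-differentiable step; a common workaround (assumed here, not stated in the text) is a surrogate gradient, where the forward pass emits hard spikes but the backward pass uses a smooth stand-in derivative. A minimal sketch:

```python
import numpy as np

def spike(v, threshold=1.0):
    # Forward pass: hard, non-differentiable spike (Heaviside step)
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=2.0):
    # Backward pass: smooth fast-sigmoid surrogate for the step's
    # derivative; beta controls how sharply it peaks at threshold.
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2
```

During backpropagation, `surrogate_grad` replaces the true (zero-almost-everywhere) derivative of `spike`, so error signals can flow from output nodes back to input nodes.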
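For supervised temporal learning, the description above says classification depends on the firing times of the output neurons. One simple decision rule consistent with that (the function and data layout are my assumptions) is first-spike latency decoding: the output neuron that fires earliest determines the class.

```python
def classify_by_first_spike(spike_times):
    """Pick the class whose output neuron fires first.

    spike_times maps class label -> time of first spike,
    with None for neurons that stayed silent.
    """
    fired = {c: t for c, t in spike_times.items() if t is not None}
    if not fired:
        return None  # no output neuron fired
    return min(fired, key=fired.get)
```

With only two output neurons this reduces to a clean binary decision, which is consistent with the observation that these models suit binary classification better than multi-class tasks.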
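The ANN-to-SNN conversion idea can be illustrated with rate coding: an integrate-and-fire neuron driven by a trained ReLU unit's weights produces a firing rate that approximates the ReLU's activation, which is why the conversion loses only a little accuracy. This sketch assumes rate-coded conversion with a unit threshold; names and constants are mine.

```python
import numpy as np

def if_neuron_rate(x, w, threshold=1.0, steps=1000):
    """Simulate an integrate-and-fire neuron for `steps` time steps.

    With rate coding, the spike rate approximates relu(w @ x)
    for drives in [0, 1]; above 1 the rate saturates at one
    spike per step.
    """
    v, spikes = 0.0, 0
    drive = float(np.dot(w, x))  # constant input current per step
    for _ in range(steps):
        v += drive
        if v >= threshold:
            v -= threshold  # soft reset preserves residual charge
            spikes += 1
    return spikes / steps  # spikes per time step

def relu(z):
    # The ANN activation the spike rate is meant to approximate
    return max(0.0, float(z))
```

For example, a drive of 0.5 makes the neuron spike every other step, giving a rate of about 0.5, matching `relu(0.5)`; a negative drive produces no spikes, matching `relu` of a negative input.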
