A computational neuroscientific perspective on encoding, optimisation, free energy, and architectural principles. I review selected papers presented during the "Theory towards Brains, Machines and Minds" Workshop held by the RIKEN Centre for Brain Science on 15–16 October 2019.

Changelog

Updated 21/10/19: Added ENet as an implementation of an asymmetric encoder-decoder architecture.

Asymmetry in the auditory cortex

Experimental methodology: calcium imaging of the mouse auditory cortex during a perceptual decision-making task.

| Finding | Possible implementations in AI |
| --- | --- |
| Encoder neuron behaviour is modulated by reward expectation and stimulus probability, while decoder neuron behaviour is not. The activations driving output behaviour are either parallel to decoder processing, or are modulated in downstream areas. | Reinforcement learning might benefit from asymmetric fine-tuning, where only encoder weights are fine-tuned. Alternatively, we might mimic this with skip connections: one branch connects the encoder directly to the output modules, skipping the decoder, while another connects the encoder to the output modules through the decoder (see the first sketch below). |
| A relatively larger number of encoder neurons than decoder neurons participate in the task. | Asymmetric encoder-decoder architectures, where encoder layer size or depth exceeds that of the decoder. Found in ENet, a real-time semantic segmentation network (see the second sketch below). |
| Encoder neurons vary their thresholds more than decoder neurons, so decoder neuron behaviour more closely resembles the prediction of the constant synaptic weights hypothesis (weights stay stable) than that of the dynamic synaptic weights hypothesis (weights change frequently). | Differentiated learning rates, where encoders learn faster than decoders (see the third sketch below). |
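To make the skip-connection idea concrete, here is a minimal PyTorch sketch (the module names and layer sizes are hypothetical, not taken from any of the papers): the output head receives the encoder's activations both directly and through the decoder, and the decoder's parameters are frozen so that fine-tuning only updates the encoder, in the spirit of asymmetric fine-tuning.

```python
import torch
import torch.nn as nn

class DualBranchNet(nn.Module):
    """Encoder feeds the output head twice: directly, and via the decoder."""

    def __init__(self, in_dim=64, latent_dim=32, out_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.ReLU())
        # The head sees both branches concatenated.
        self.head = nn.Linear(latent_dim * 2, out_dim)

    def forward(self, x):
        z = self.encoder(x)
        direct = z                      # branch 1: encoder -> output (skips decoder)
        via_decoder = self.decoder(z)   # branch 2: encoder -> decoder -> output
        return self.head(torch.cat([direct, via_decoder], dim=-1))

model = DualBranchNet()

# Asymmetric fine-tuning: freeze the decoder so only encoder (and head)
# weights are updated, mirroring reward-modulated encoder plasticity.
for p in model.decoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
```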
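Below is a toy illustration of the second row's idea, in the same spirit as ENet's design: a deep, parameter-heavy encoder paired with a deliberately shallow decoder. The layer widths and class count here are invented for illustration and are not ENet's actual architecture.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out, stride=1):
    """Standard conv -> batch norm -> ReLU block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

asymmetric_segnet = nn.Sequential(
    # Encoder: most of the depth and parameters live here.
    conv_block(3, 16, stride=2),
    conv_block(16, 64, stride=2),
    conv_block(64, 64),
    conv_block(64, 128, stride=2),
    conv_block(128, 128),
    conv_block(128, 128),
    # Decoder: deliberately shallow, only upsampling back to input resolution.
    nn.ConvTranspose2d(128, 32, kernel_size=2, stride=2),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(16, 12, kernel_size=2, stride=2),  # 12 output classes
)

# Sanity check: a 64x64 image comes back at full resolution, one map per class.
out = asymmetric_segnet(torch.randn(1, 3, 64, 64))
assert out.shape == (1, 12, 64, 64)
```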
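Differentiated learning rates, from the third row, are straightforward to express with per-module parameter groups. A minimal sketch, with hypothetical modules and rates chosen only so that the encoder's weights change faster than the decoder's:

```python
import torch
import torch.nn as nn

# A tiny stand-in autoencoder; any model with separable encoder/decoder works.
model = nn.ModuleDict({
    "encoder": nn.Linear(64, 32),
    "decoder": nn.Linear(32, 64),
})

# Separate parameter groups give the encoder a 100x higher learning rate,
# loosely mirroring the more variable thresholds observed in encoder neurons.
optimizer = torch.optim.SGD([
    {"params": model["encoder"].parameters(), "lr": 1e-2},  # fast encoder
    {"params": model["decoder"].parameters(), "lr": 1e-4},  # slow decoder
])
```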