When, how, and why do different neural models learn the same representations?
New findings in neuroscience and artificial intelligence reveal a shared pattern: whether in biological brains or artificial models, different learning systems tend to develop similar representations when exposed to similar stimuli.
The emergence of these similar representations has sparked growing interest in both neuroscience and artificial intelligence, with each field offering promising directions for their theoretical understanding.
These directions include analyzing learning dynamics in neuroscience and studying identifiability in function and parameter space in artificial intelligence.
While the theoretical questions demand investigation in their own right, the practical applications are equally compelling: aligning representations enables model merging, stitching, and reuse, and plays a crucial role in multi-modal scenarios. Furthermore, studying the features that different learning processes universally highlight brings us closer to pinpointing the invariances that naturally emerge from learning models, possibly suggesting ways to enforce them.
The objective of the workshop is to discuss theoretical findings, empirical evidence, and practical applications of this phenomenon, drawing on the cross-pollination of different fields (ML, Neuroscience, Cognitive Science) to foster the exchange of ideas and encourage collaboration.
In short, our primary focus is to investigate the underlying reasons, mechanisms, and extent of similarity in internal representations across distinct neural models, with the ultimate goal of unifying these representations into a single cohesive whole.