Call For Papers

Discover why, when and how distinct learning processes yield similar representations, and the degree to which these can be unified.

Neural models, whether biological or artificial, tend to develop structurally similar representations when exposed to semantically similar stimuli. This convergence is a common yet enigmatic phenomenon, sparking increasing interest in Neuroscience, Artificial Intelligence, and Cognitive Science. This workshop aims to build a unified view of this phenomenon and to facilitate the exchange of ideas and insights across these fields, focusing on three key questions:

When: Understanding the patterns by which these similarities emerge in different neural models and developing methods to measure them.

Why: Investigating the underlying causes of these similarities in neural representations, considering both artificial and biological models.

What for: Exploring and showcasing applications in modular deep learning, including model merging, reuse, stitching, efficient strategies for fine-tuning, and knowledge transfer between models and across modalities.

🔭 Topics of Interest 🔬

A non-exhaustive list of topics of interest includes:

  • Model merging, stitching, and compositionality: Techniques and strategies for combining, aligning, or transferring knowledge between neural models.
  • Representational alignment: Methods, theories, and measures that pertain to the alignment of representations across different neural networks.
  • Identifiability in neural models: The existence of nonlinear data representations that are optimal (and unique) with respect to specific downstream tasks.
  • Symmetry and equivariance in NNs: Their role in the emerging similarity of neural representations across different models.
  • Learning dynamics: The processes and dynamics that lead to the emergence of similar representations during training.
  • Multimodal learning: Techniques that fuse information from multiple modalities into a shared multimodal representation.
  • Disentangled representations: Techniques for learning representations that capture the generative factors of variation underlying the data distribution.
  • Multiview representation learning: Approaches for learning shared representations from data acquired under different conditions or modalities.
  • Representation Similarity Analysis: Assessing the similarity of neural representations across subjects by comparing their responses to similar stimuli.
  • Linear Mode Connectivity: The (linear) connectivity of trained models in weight space.
  • Similarity-based learning: Learning strategies that leverage representation similarity for improved performance.
  • Similarity measures in NNs: Metrics to assess similarity in neural models at both the weight and the representation level (a minimal example is sketched below).
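To make that last point concrete, here is a minimal sketch of linear Centered Kernel Alignment (CKA), one commonly used representation-similarity measure. The toy data, shapes, and variable names below are illustrative only, not a prescribed setup or benchmark.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices
    of shape (n_samples, d1) and (n_samples, d2); d1 and d2 may differ."""
    # Center each representation over the sample axis.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    cross = np.linalg.norm(X.T @ Y, ord="fro") ** 2
    return cross / (
        np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    )

# Toy check: linear CKA is invariant to orthogonal transformations of the
# feature space, so a rotated copy of a representation scores exactly 1.
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(100, 64))             # 100 stimuli, 64 units
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))  # random rotation matrix
print(linear_cka(acts_a, acts_a @ Q))           # ~1.0
print(linear_cka(acts_a, rng.normal(size=(100, 64))))  # markedly lower
```

The same quantity can also be computed from the n × n Gram matrices, which extends naturally to nonlinear kernels; the feature-space form above is simply cheaper when the number of samples exceeds the representation width.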

📝 We gathered resources and references on the topics of interest in this UniReps GitHub repository. Pull requests are more than welcome!


🔴 Full Paper (Archival)

This track is for complete papers, to be published in a dedicated workshop proceedings volume (last year’s proceedings can be found here).

Full papers should be at most 9 pages of main text (i.e., excluding references and appendix) and must be anonymized.

🔵 Extended Abstract (Non-Archival)

This track is for early-stage results, insightful negative findings, and opinion pieces.

Extended abstracts should be at most 4 pages of main text (i.e., excluding references and appendix) and must be anonymized.


Both tracks will be featured in the workshop poster session, giving authors the opportunity to present their work; a subset of the submissions will be selected for a spotlight talk session during the workshop.

For both tracks, please make sure to use the NeurIPS LaTeX template. For camera-ready submissions, please refer to the section below.

For each track, we will award Best Paper and Honorable Mention prizes, each providing a free full student registration to NeurIPS 2024.

Camera-Ready Instructions

For camera-ready submissions, download the necessary template and instruction files at the following link.

📰 Blog Post Track

Inspired by the ICLR blogpost track, we also invite submissions for blog posts that discuss positions, recent advances, open problems, reproducibility results, and future directions in topics of interest for the workshop.

Selected blog posts will be published on the UniReps official website to showcase high-quality scientific work and help the community exchange ideas.

More details about the submission procedure are coming soon 🔜.

⚠️ Important Dates 📅

  • Technical paper submission deadline: Sep 23, 2024 (AoE), extended from Sep 20, 2024 – Submit on OpenReview
  • Final decisions to authors: Oct 09, 2024 (AoE)
  • Camera-ready deadline: Nov 06, 2024 (AoE), extended from Nov 04, 2024

👥 Join the Program Committee

We believe that maintaining high review quality is crucial, and one of the best ways to achieve this is to keep the review load manageable for each reviewer. Last year, thanks to our great Program Committee, we ensured that no reviewer was assigned more than two papers, with at most one full paper. To continue this approach, we are seeking new Program Committee members who share a passion for these topics and are eager to contribute.

Additionally, we believe reviewers deserve recognition for their hard work, so we are working to introduce prizes for outstanding reviewers. Stay tuned for more details!

If you’re interested in joining our Program Committee, sign up here!