
Selection in Deep Learning

Lately, I’ve been increasingly fascinated by a simple but powerful idea in Deep Learning: selection. In this post, I want to highlight several research directions where selection plays a central role.

Selecting Layers for Robust Finetuning:

Instead of finetuning all layers for adversarial robustness, recent work shows that adapting only a small set of critical layers can lead to better robustness and more stable optimization. Paper
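To make the idea concrete, here is a minimal sketch (names and scores are made up for illustration, not taken from the paper): given any per-layer importance score, such as gradient norms on adversarial examples, keep only the top-k layers trainable and freeze the rest.

```python
# Hypothetical illustration: pick the top-k "critical" layers to finetune
# and freeze the rest. The scores stand in for any layer-importance
# measure (e.g., gradient norms computed on adversarial examples).

def select_layers(layer_scores, k):
    """Return the names of the k highest-scoring layers."""
    ranked = sorted(layer_scores, key=layer_scores.get, reverse=True)
    return set(ranked[:k])

# Toy importance scores for a 6-layer network.
scores = {"conv1": 0.2, "conv2": 0.9, "conv3": 0.1,
          "conv4": 0.7, "fc1": 0.4, "fc2": 0.05}

trainable = select_layers(scores, k=2)
frozen = set(scores) - trainable
print(sorted(trainable))  # the layers we would adapt; all others stay frozen
```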

Selecting Tokens for Video Understanding:

Transformers have built-in attention, but recent video models show that explicitly removing irrelevant tokens is efficient and can sometimes outperform attention alone. Pruning noisy visual patches helps the model focus on what truly matters. Paper
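The core mechanic can be sketched in a few lines, assuming we already have a saliency score per token (real models derive these from attention maps or small learned predictors; the values below are toy numbers):

```python
# A minimal sketch of score-based token pruning: keep only the top-k
# tokens/patches by saliency and drop the rest before running attention.

def prune_tokens(tokens, scores, keep):
    """Keep the `keep` highest-scoring tokens, preserving original order."""
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:keep]
    kept = sorted(top)  # restore temporal/spatial order after ranking
    return [tokens[i] for i in kept]

frames = ["t0", "t1", "t2", "t3", "t4"]
saliency = [0.1, 0.8, 0.05, 0.9, 0.3]
print(prune_tokens(frames, saliency, keep=3))  # -> ['t1', 't3', 't4']
```

Since attention is quadratic in sequence length, pruning noisy patches early reduces compute and, as the paper argues, can also sharpen what the model attends to.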

Selecting Clients in Federated Learning:

In federated learning, filtering out harmful or low-quality clients before their updates reach the global model is essential; deciding which clients to trust remains one of the field's fundamental challenges. Paper
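One simple screening heuristic (a generic illustration, not the paper's method) is to drop clients whose update norm deviates far from the median before averaging:

```python
# Hedged sketch: screen out clients whose update norm is far from the
# median norm, a crude proxy for detecting poisoned or low-quality updates.

def filter_clients(update_norms, tol=2.0):
    """Keep clients whose norm lies within tol * median of the median."""
    norms = sorted(update_norms.values())
    median = norms[len(norms) // 2]
    return {c for c, n in update_norms.items()
            if abs(n - median) <= tol * median}

norms = {"c1": 1.0, "c2": 1.2, "c3": 25.0, "c4": 0.9}  # c3 looks anomalous
print(sorted(filter_clients(norms)))  # -> ['c1', 'c2', 'c4']
```

Real systems use stronger defenses (robust aggregation, per-client validation), but the selection step itself always looks like this: score each client, then admit only the trustworthy subset.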

Selecting Replay Samples in Continual Learning:

In real-world systems, models must be continually updated. Key questions include when to refresh old knowledge and which past samples to replay. Paper
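A classic baseline for the "which samples" question is reservoir sampling, which maintains a uniform random subset of an unbounded stream with fixed memory (shown here as a generic sketch, not the paper's selection rule):

```python
import random

# Reservoir sampling: every item in the stream ends up in the buffer with
# equal probability, using only O(capacity) memory -- a standard baseline
# for choosing which past samples to keep for replay.

def reservoir_update(buffer, item, seen, capacity):
    """Insert `item` (the `seen`-th stream example, 1-indexed) into the buffer."""
    if len(buffer) < capacity:
        buffer.append(item)
    else:
        j = random.randrange(seen)  # uniform index in [0, seen)
        if j < capacity:
            buffer[j] = item
    return buffer

random.seed(0)
buf = []
for t, x in enumerate(range(100), start=1):
    reservoir_update(buf, x, t, capacity=5)
print(buf)  # a uniform random subset of 5 of the 100 stream items
```

Smarter strategies replace the uniform rule with informativeness or diversity scores, but the interface is the same: a selection policy decides what the model gets to remember.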

Subset Selection:

My interest in this topic began when I came across the ICML 2021 workshop “Subset Selection in Machine Learning: From Theory to Applications,” where Prof. Baharan Mirzasoleiman emphasized a key insight: as data grows, selection becomes essential for both efficiency and learning quality. This resonated deeply with me and eventually led to my ICML 2024 paper, which proposed a novel data selection strategy showing that carefully chosen subsets can perform on par with, or even better than, the full dataset. Paper
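To give a flavor of what subset selection looks like in code (a generic greedy k-center pass, purely illustrative and not the strategy from the paper): repeatedly add the point farthest from the current subset, so the selected points spread out to cover the data.

```python
# Generic greedy k-center selection (illustration only): each step adds
# the point with the largest distance to the current subset, so the
# chosen subset covers the dataset as evenly as possible.

def k_center(points, k):
    """Greedily select k indices covering `points` (1-D here for brevity)."""
    selected = [0]  # start from the first point
    dist = [abs(p - points[0]) for p in points]
    while len(selected) < k:
        far = max(range(len(points)), key=dist.__getitem__)
        selected.append(far)
        # Update each point's distance to its nearest selected point.
        dist = [min(d, abs(p - points[far])) for d, p in zip(dist, points)]
    return selected

data = [0.0, 0.1, 0.2, 5.0, 5.1, 10.0]
print(k_center(data, k=3))  # -> [0, 5, 3]: one point from each cluster
```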

A Unifying Pattern:

Across these diverse problem areas, a common theme emerges: deep learning advances not only by learning more, but by choosing what matters.

This post is licensed under CC BY 4.0 by the author.
