[Seminar] MLDS Unit Seminar 2024-9 by Mr. Jose Restom (Mohamed bin Zayed University of Artificial Intelligence: MBZUAI, UAE), Dr. Mohammad Sabokrou, OIST at D23, Lab5
Description
Speaker 1: Mr. Jose Restom (Mohamed bin Zayed University of Artificial Intelligence: MBZUAI, UAE)
Title: Handling Data Heterogeneity via Architectural Design for Federated Visual Recognition
Abstract: Federated Learning (FL) is a promising research paradigm that enables the collaborative training of machine learning models among various parties without the need for sensitive information exchange. Nonetheless, retaining data in individual clients introduces fundamental challenges to achieving performance on par with centrally trained models. Our study provides an extensive review of federated learning applied to visual recognition. It underscores the critical role of thoughtful architectural design choices in achieving optimal performance, a factor often neglected in the FL literature. Many existing FL solutions are tested on shallow or simple networks, which may not accurately reflect real-world applications. This practice restricts the transferability of research findings to large-scale visual recognition models. Through an in-depth analysis of diverse cutting-edge architectures such as convolutional neural networks, transformers, and MLP-mixers, we experimentally demonstrate that architectural choices can substantially enhance FL systems' performance, particularly when handling heterogeneous data. We study visual recognition models from five different architectural families on four challenging FL datasets. We also re-investigate the inferior performance of convolution-based architectures in the FL setting and analyze the influence of normalization layers on FL performance. Our findings emphasize the importance of architectural design for computer vision tasks in practical scenarios, effectively narrowing the performance gap between federated and centralized learning.
Speaker 2: Dr. Mohammad Sabokrou, Staff Scientist, OIST
Title: Universal Novelty Detection Through Adaptive Contrastive Learning
Abstract: This talk focuses on the critical task of novelty detection for deploying machine learning models in real-world scenarios. A key aspect of novelty detection methods is their universality, or their ability to generalize across various distributions of training and test data. Distribution shifts can occur in either the training or the test set. Training set shifts involve training a novelty detector on a new dataset and expecting strong transferability, while test set shifts pertain to the method's performance when encountering a shifted test sample.
Our experimental results reveal that existing methods struggle to maintain universality due to their rigid inductive biases. To address this, we aim to develop more generalized techniques with adaptable inductive biases. By leveraging contrastive learning, we create an efficient framework that can switch to and adopt new inductive biases through appropriate augmentations when forming negative pairs.
We introduce a novel probabilistic auto-negative pair generation method, AutoAugOOD, combined with contrastive learning to establish a universal novelty detection method.
This talk is based on our findings from two papers: "Universal Novelty Detection Through Adaptive Contrastive Learning" (CVPR 2024) and "Enhancing Anomaly Detection Generalization through Knowledge Exposure: The Dual Effects of Augmentation."