UNDERGRADUATES: OPEN PROJECTS

Project 1: Seizure Detection

Summary: In collaboration with the International Conference on Artificial Intelligence in Epilepsy and Other Neurological Disorders (2025), EPFL, ETH, and their partners are organizing a seizure detection challenge. The initiative builds on prior research demonstrating the strong performance of Vision Transformer (ViT)-based methods in seizure prediction; extending these approaches to seizure detection is a promising next step. The proposed methodology converts EEG signals into scalograms and applies cross-channel techniques for signal processing, which may include a mixture of experts within attention blocks, cross-attention mechanisms, and related approaches to improve model performance. Interested participants can find more details on the challenge website: Seizure Detection Challenge.
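The EEG-to-scalogram step mentioned above can be sketched with a continuous wavelet transform. The snippet below is a minimal, self-contained illustration using a complex Morlet wavelet in NumPy; the sampling rate, frequency band, and toy "alpha burst" signal are all hypothetical placeholders, not the challenge's actual preprocessing pipeline.

```python
import numpy as np

def morlet(t, freq, width=6.0):
    """Complex Morlet wavelet centred at `freq` Hz.
    `width` is the number of cycles (time/frequency trade-off)."""
    sigma = width / (2 * np.pi * freq)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))

def scalogram(signal, fs, freqs, width=6.0):
    """Return |CWT| of a 1-D signal: shape (len(freqs), len(signal))."""
    n = len(signal)
    out = np.empty((len(freqs), n))
    for i, f in enumerate(freqs):
        sigma = width / (2 * np.pi * f)
        # Truncate the wavelet at 4 sigma, capped below the signal length.
        half = min(int(np.ceil(4 * sigma * fs)), (n - 1) // 2)
        t = np.arange(-half, half + 1) / fs
        w = morlet(t, f, width)
        w = w / np.sqrt(np.sum(np.abs(w) ** 2))  # unit-energy wavelet
        out[i] = np.abs(np.convolve(signal, w, mode="same"))
    return out

# Toy "EEG": a 10 Hz burst in the second half of 2 s at 256 Hz (illustrative).
fs = 256
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) * (t > 1) + 0.1 * rng.standard_normal(len(t))

freqs = np.linspace(4, 40, 37)  # 4-40 Hz band, 1 Hz steps
S = scalogram(x, fs, freqs)
print(S.shape)  # (37, 512): one row per frequency, one column per sample
```

Stacking such scalograms across EEG channels yields the image-like input a ViT expects; the cross-channel mixing (mixture of experts, cross-attention) would then operate on those per-channel representations.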

Project 2: Interpretable Forecasting For Medical Applications

Summary: Inspired by a presentation by Prof. Mihaela van der Schaar at ICLR 2024 in Vienna, this project focuses on developing interpretable forecasting models for medical applications. In healthcare, model interpretability is often a prerequisite for deployment, which has led practitioners to favor simpler systems such as Support Vector Machines (SVMs) over neural networks. This project explores whether Bayesian methods can bridge the gap between performance and interpretability: by quantifying uncertainty over parameters and predictions, Bayesian techniques may yield robust forecasting models that meet the stringent requirements of medical applications while still achieving strong performance.
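As one concrete flavor of what "Bayesian and interpretable" can mean here, the sketch below fits Bayesian linear regression to a toy lag-1 forecasting task: the posterior over weights is Gaussian and closed-form, so every coefficient comes with an uncertainty estimate, and predictions carry calibrated variances. The data, prior precision `alpha`, and noise precision `beta` are all illustrative assumptions, not a prescribed setup for the project.

```python
import numpy as np

def fit_bayes_linreg(X, y, alpha=1.0, beta=25.0):
    """Closed-form Gaussian posterior over weights.
    alpha = prior precision, beta = observation noise precision."""
    d = X.shape[1]
    cov = np.linalg.inv(alpha * np.eye(d) + beta * X.T @ X)
    mean = beta * cov @ X.T @ y
    return mean, cov

def predict(X, mean, cov, beta=25.0):
    """Predictive mean and variance for new inputs."""
    mu = X @ mean
    var = 1.0 / beta + np.sum((X @ cov) * X, axis=1)
    return mu, var

# Toy forecasting task: predict x[t] from [1, x[t-1]] on a random walk.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=200)) * 0.1
X = np.column_stack([np.ones(199), series[:-1]])
y = series[1:]

mean, cov = fit_bayes_linreg(X, y)
mu, var = predict(X[-1:], mean, cov)
print(mean)  # interpretable: intercept and lag-1 coefficient
print(var)   # predictive variance, usable for risk-aware decisions
```

The appeal for clinical settings is that both the fitted coefficients and the per-prediction variances can be inspected and audited, unlike the internal weights of a deep network.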

Project 3: FGL As Temporal Attention 

Summary: The current implementation of Future Guided Learning (FGL) performs uncertainty quantification through a separate model trained for next-step forecasting. This project seeks to integrate that mechanism directly into the model via temporal attention. Previous research has demonstrated the feasibility of such mechanisms; this work will focus on leveraging attention maps as a form of memory for long-sequence forecasting tasks. The objective is to use these attention maps to enhance long-term prediction capabilities, potentially improving performance in applications requiring extended temporal forecasting.
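To make the "attention map as memory" idea concrete, here is a minimal sketch of causal temporal attention over a sequence of hidden states: the (T, T) attention map records which past steps informed each prediction and could, in principle, be retained as the memory structure described above. The sequence length, dimensionality, and inputs are hypothetical; this is not the FGL implementation itself.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(h):
    """Causal scaled dot-product attention over time steps.

    h: (T, d) hidden states. Returns the attended states and the
    (T, T) attention map; row t shows how much each step <= t
    contributed to the output at step t.
    """
    T, d = h.shape
    scores = h @ h.T / np.sqrt(d)                  # pairwise similarities
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[mask] = -np.inf                         # causal: no peeking ahead
    A = softmax(scores, axis=-1)                   # each row sums to 1
    return A @ h, A

rng = np.random.default_rng(0)
h = rng.normal(size=(8, 4))                        # toy 8-step sequence
out, A = temporal_attention(h)
print(A.shape)  # (8, 8) attention map, reusable as a temporal memory
```

For long-sequence forecasting, rows of `A` accumulated over time give a compact record of which past steps the model relied on, which is the kind of structure this project would exploit in place of a separate forecasting model.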