Focused on state-of-the-art deep learning models for MR image enhancement (denoising and super-resolution)
Designed a CNN architecture that leverages the attention mechanism of Vision Transformers and recovers more detail than the solution currently used in the product
Worked on self-supervised methods for speaker and language recognition, giving monthly "lightning" talks on my progress (supervised by Dr. Réda Dehak)
Developed a label-efficient non-contrastive speaker verification model that outperforms its supervised counterpart when fine-tuned with only 2% of labeled data
This work led to a publication and an oral presentation at INTERSPEECH 2022 (one of the top conferences in the field)