About

I am an ML/Research Engineer based in London, working at the intersection of audio, multimodal machine learning, and generative models. I recently completed my MSc in Artificial Intelligence at Queen Mary University of London with Distinction, supported by a Chevening Scholarship.

My research focuses on audio reasoning, representation learning, and LLM-based multimodal understanding, with growing work on generative music and audio systems. My Master’s thesis, SAR-LM: Symbolic Audio Reasoning with Large Language Models, was accepted for an oral presentation at the LLM4MA Workshop (ISMIR 2025) and is also under review at ICASSP 2026.

Alongside research, I build scalable ML systems, GPU-backed pipelines, and production-ready services using PyTorch, FastAPI, Docker, and AWS. I currently work as an AI Engineer, developing reliable multimodal generation workflows and backend systems for ML-driven applications.

Before moving to London, I spent two years as a software engineer designing REST APIs, optimizing data pipelines, and building distributed backend systems. My background also includes deep learning research in medical imaging and experience teaching data structures.


🗞️ News