About

I’m currently a Master’s student in Artificial Intelligence at Queen Mary University of London, studying as a Chevening Scholar.

My current research explores how large language models can reason about sound. I’m interested in using symbolic representations to help machines move beyond hearing and toward understanding, especially in systems that bridge audio and language in interpretable ways.

Before moving to London, I worked for two years as a software engineer, building backend systems for AI-driven products. I focused on designing clean architectures, developing secure and performant REST APIs, and continuously improving the codebase through refactoring and performance tuning.

Prior to that, I earned my Bachelor’s degree in Computer Engineering, where I focused on medical AI. My work involved developing deep learning models for breast cancer detection from ultrasound images and COVID-19 diagnosis from chest X-rays. I also explored imagined speech classification through EEG signal analysis, aiming to decode intended speech from brain activity. During this time, I worked as a Teaching Assistant for the Data Structures course, where I helped students understand core concepts, guided them through their project implementations, and marked their exams and final projects.

Beyond research, my love for music runs deep, and that’s what draws me to the space where sound meets code. I’ve been working on a personal project to build an AI DJ, not just a machine that mixes tracks, but one that can explain the reasoning behind each transition. The vision is to create a system that demystifies the art of DJing, making its creative flow more transparent for both musicians and curious listeners.

🗞️ News