I am a PhD candidate in Computer Science at the Technical University of Munich, working with Prof. Nassir Navab at the CAMP chair. My research in Human-Computer Interaction focuses on multisensory interactions in augmented reality. I am specifically interested in audiovisual perception and sonic interaction design for healthcare applications. I have published work at ACM CHI, IEEE ISMAR, IEEE TVCG, and MICCAI.
Currently, I am a visiting student at the Augmented Perception Lab with Prof. David Lindlbauer at the Human-Computer Interaction Institute at Carnegie Mellon University. I previously received an MS in Design from Stanford University, where I was advised by Prof. Sean Follmer.
An audio mosaic of bird songs that visualizes and sonifies the AI model's parameter k, which governs similarity-based retrieval.
A human-AI interaction based on emotions and colors rather than words.
An expressive tool for contemplation: creating ephemeral drawings using sound.
This audio visualizer renders the time-domain waveform of a given audio input in real time.
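A common way to render a waveform in real time is to reduce each block of incoming samples to one (min, max) pair per pixel column, so the display cost stays constant regardless of the audio buffer size. The sketch below illustrates that idea; the helper name and approach are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of per-column peak extraction for waveform display.
# Hypothetical helper: maps a block of audio samples to (min, max)
# pairs, one pair per pixel column of the visualizer.

def waveform_columns(samples, width):
    """Downsample audio samples into `width` (min, max) column pairs."""
    if not samples or width <= 0:
        return []
    columns = []
    n = len(samples)
    for col in range(width):
        # Each column covers a contiguous slice of the sample block.
        start = col * n // width
        end = max(start + 1, (col + 1) * n // width)
        chunk = samples[start:end]
        columns.append((min(chunk), max(chunk)))
    return columns

# Example: a short four-sample block rendered into 2 columns.
print(waveform_columns([0.0, 0.5, -0.5, 1.0], 2))
# -> [(0.0, 0.5), (-0.5, 1.0)]
```

In a live visualizer this function would run once per audio callback, with the resulting pairs drawn as vertical lines on a canvas.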
A physical and virtual reality exhibition piece on the human gut microbiome that invites visitors to transcend the limits of human senses and envision other, simultaneous realities.
An audio sequencer that lets you discover the soundscape of space.