I completed my PhD at the University of Stuttgart’s Institute for Artificial Intelligence, where I conducted research under the supervision of Prof. Dr. Steffen Staab. My work focuses on human-computer interaction and machine learning, particularly multimodal interaction techniques that combine eye tracking with touch or non-lexical voice input.
During my time at the university, I contributed to the EXIST-funded project Semanux, which aims to make digital interaction more inclusive by enabling people with disabilities to control computers using their individual capabilities.
My research has been published at leading conferences such as ACM CHI, ACM ETRA, and INTERSPEECH. It spans novel methods for eye typing as well as machine-learning approaches to classifying non-verbal voice expressions, including a 2023 INTERSPEECH paper on deep-learning methods for recognizing humming and other non-lexical vocal inputs. I also taught courses in Human-Computer Interaction, Information Retrieval, and Machine Learning, and supervised student theses.
Before and after my doctoral studies, I worked in industry, including roles at Bliksund in Norway and Union Betriebs-GmbH in Bonn, contributing to a range of IT projects such as a rules repository system for the CDU and the personal homepage of Angela Merkel. Currently, I apply my background in machine learning and multimodal interaction to my work at Alfacube and to Tiltility, a research-focused platform for camera-based interaction.
PhD in Human-Computer Interaction and AI, 2025
University of Stuttgart
MSc in Web Science, 2019
University of Koblenz-Landau
Following the acquisition of Semanux, I am responsible for the development and integration of AI systems across the Alfa corporate group.
• LLM Evaluation (OwlMcGavel): Project owner for a custom evaluation framework for RAG pipelines, automating testing for answer relevancy, faithfulness, and hallucination detection.
• Computer Vision: Trained and deployed a custom background removal model optimized for the alfaview video conferencing ecosystem.
• Model Engineering: Developed a custom model converter (PyTorch to TensorFlow/TFLite to ONNX) to streamline cross-platform deployment and high-performance execution.
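To make the evaluation work above concrete, here is a minimal, dependency-free sketch of the kind of faithfulness and hallucination check a RAG evaluation framework can apply. The function names, the token-overlap heuristic, and both thresholds are illustrative assumptions for this page only, not the actual OwlMcGavel implementation, which would typically rely on stronger techniques such as LLM-as-judge scoring or embedding similarity.

```python
import re

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def faithfulness(answer: str, context: str) -> float:
    """Fraction of answer sentences whose words are mostly
    covered by the retrieved context (a crude proxy for
    'grounded in the sources')."""
    context_tokens = _tokens(context)
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer.strip()) if s]
    if not sentences:
        return 0.0
    supported = 0
    for sentence in sentences:
        toks = _tokens(sentence)
        if not toks:
            continue
        overlap = len(toks & context_tokens) / len(toks)
        if overlap >= 0.6:  # illustrative cut-off, not a tuned value
            supported += 1
    return supported / len(sentences)

def flag_hallucination(answer: str, context: str, threshold: float = 0.5) -> bool:
    """Flag answers whose faithfulness falls below the threshold."""
    return faithfulness(answer, context) < threshold
```

A sentence fully restated from the context scores 1.0, while a sentence introducing facts absent from the context drags the score down and trips the flag; production metrics differ mainly in replacing the overlap heuristic with a semantic judgment.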
Doctoral Researcher | Human-Computer Interaction & AI
Researcher in Human-Computer Interaction and Artificial Intelligence, with expertise in Multimodal Signal Processing, Digital Accessibility, and On-device ML.
Key Achievements:
• Grant Writing: Authored an accepted DFG proposal for the University of Stuttgart’s AI Institute shortly after completing my PhD.
• Speech & AI (Interspeech): Developed state-of-the-art ML models for recognizing non-verbal vocalizations (e.g., laughter, humming), advancing the field of Natural Language Interaction.
• Gaze Interaction (ACM CHI): Engineered pioneering eye-typing and gaze-based interfaces to improve digital accessibility for users with motor impairments.
• Mentorship: Supervised student theses and led tutorials in Machine Learning, Information Retrieval, and HCI.