Research Interests
My research focuses on the interoperability of technical systems using natural language processing (NLP) methods. The objective is to establish natural language as a wire format to enable communication between different machines. Large Language Models (LLMs) play a central role in this approach: they act as a translation instance, interpreting and transforming machine communication by analyzing technical documentation and incorporating the respective context.
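The core idea can be sketched in a few lines: a message from machine A is handed to an LLM together with the technical documentation of both endpoints, and the model produces the equivalent message in machine B's format. The function below only assembles such a prompt; `call_llm` is a hypothetical placeholder for whatever LLM endpoint is used, and the sensor/actuator documentation strings are invented for illustration.

```python
def build_translation_prompt(message: str, source_docs: str, target_docs: str) -> str:
    """Assemble the context an LLM needs to translate between two machine formats."""
    return (
        "You translate messages between technical systems.\n"
        f"Source format documentation:\n{source_docs}\n\n"
        f"Target format documentation:\n{target_docs}\n\n"
        f"Translate this message into the target format:\n{message}\n"
    )


def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: any LLM API could be called here.
    raise NotImplementedError


# Illustrative, invented documentation snippets for two machines:
prompt = build_translation_prompt(
    message='{"temp_c": 21.5}',
    source_docs="Sensor API: JSON object with key 'temp_c' (float, Celsius).",
    target_docs="Actuator API: XML element <temperature unit='K'>...</temperature>.",
)
print(prompt)
```

The translation quality then hinges on how much of the relevant documentation and context fits into the prompt, which is exactly where the research questions below start.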
Topics for Theses
Exciting research questions in this field include the following:
- To what extent can the technical context improve the translation quality of LLMs in machine communication? How can technical documentation, standards, or specific use cases be leveraged to systematically enhance the translation performance of Large Language Models?
- How efficient is natural language compared to traditional binary formats for machine communication?
- What challenges arise when using Large Language Models for machine translation?
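For the efficiency question, a toy comparison makes the trade-off concrete: the same sensor reading encoded as a compact binary record versus a natural-language sentence. The field layout and wording below are invented for illustration, not taken from any real protocol.

```python
import struct

# One sensor reading: (sensor id, temperature in degrees Celsius).
reading = (42, 21.5)

# Binary: 2-byte unsigned id + 4-byte float, little-endian = 6 bytes on the wire.
binary = struct.pack("<Hf", *reading)

# Natural language: the same information as a sentence an LLM could interpret.
text = f"Sensor {reading[0]} reports a temperature of {reading[1]} degrees Celsius."

print(len(binary))                 # 6 bytes
print(len(text.encode("utf-8")))   # roughly an order of magnitude larger
```

The byte counts alone favor binary formats; the research question is whether the flexibility and self-describing nature of natural language justify that overhead in practice.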
If you are interested in this topic, I would be happy to discuss possible approaches and specific questions further in a personal conversation.
Publications (3)
2025 (1)
- Navigating the Security Challenges of LLMs: Positioning Target Defenses and Identifying Research Gaps. In: Proceedings of the 11th International Conference on Information Systems Security and Privacy (ICISSP 2025). DOI: TBD
2024 (1)
- Towards Interoperability of APIs - an LLM-based approach. In: Middleware '24: Proceedings of the 25th International Middleware Conference: Demos, Posters and Doctoral Symposium
2021 (1)
- C-Test Collector: A Proficiency Testing Application to Collect Training Data for C-Tests. In: Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications