CV

General Information

Name: Lukas Paul Achatius Galke Poech

Address: Campusvej 55, 5230 Odense, Denmark

Office: Ø12-509b-2

Experience

Current position (since 2024): Assistant Professor, University of Southern Denmark (SDU), Department of Mathematics and Computer Science (IMADA), Section for Data Science and Statistics (DSS), Centre for Machine Learning (C4ML)

2022–2024: Postdoctoral Researcher, Max Planck Institute for Psycholinguistics

2017–2022: Doctoral Researcher, Kiel University & ZBW – Leibniz Information Centre for Economics

Education

2022: PhD in Computer Science, Kiel University, Germany

2017: M.Sc. in Computer Science, Kiel University, Germany

2013: B.Sc. in Computer Science, Kiel University, Germany

Grants and Major Projects

MIST: Scalable Mechanistic Interpretability for Safe and Trustworthy LLM Agents (2026–2031), Novo Nordisk Foundation, Principal Investigator. Funding for two PhD positions and one postdoc position to develop interpretability methods for understanding and controlling LLM agents.

Sustainable Language Modeling through Quantization-Aware Continual Pre-training (2025), EuroHPC JU AI and Data-Intensive Applications, Principal Investigator, 200,000 GPU hours.

Danish Foundation Models (2025–2029), Danish Government Initiative, Work Package Co-Lead (Evaluation), co-supervising 5 PhD students.

Selected publications

  • Mogens From, Jacob Nielsen, Lukas Galke, and Peter Schneider-Kamp (2026). DeToNATION: Decoupled Torch Network-Aware Training on Interlinked Online Nodes. AAAI.
  • Danial Namazifard and Lukas Galke (2025). Isolating Culture Neurons in Multilingual Large Language Models. AACL-IJCNLP Findings (to appear).
  • Richard Šléher, William Brach, Tibor Sloboda, Kristián Košťál, and Lukas Galke (2025). Guarded Query Routing for Large Language Models. ECAI.
  • Jacob Nielsen, Peter Schneider-Kamp, and Lukas Galke (2025). Continual Quantization-Aware Pre-Training: When to transition from 16-bit to 1.58-bit pre-training for BitNet language models? ACL Findings.
  • Lukas Galke, Yoav Ram, and Limor Raviv (2024). Deep neural networks and humans both benefit from compositional language structure. Nature Communications 15:10816.

Selected invited talks

  • Evaluating Large and Multilingual Language Models through Citizen Science. AI, Citizen Science, and MedTech – An Exploration, May 13, 2025, SDU, Odense, Denmark.
  • Emergent communication and learning pressures in language models. Workshop on Using Artificial Neural Networks for Studying Human Language Learning and Processing, June 10, 2024, University of Amsterdam, Netherlands.
  • What makes a language easy to deep-learn? Computational Linguistics Seminar, May 16, 2023, University of Amsterdam, Netherlands.

Teaching experience

  • AI509: Natural Language Processing – Fall 2025, SDU (main lecturer)
  • AI508: Computer Vision – Fall 2025, SDU (co-lecturer)
  • DSK809: Deep Learning – Fall 2025, SDU (course responsible)
  • AI506: Advanced Machine Learning – Spring 2025, SDU (main lecturer)
  • DM873/DS809: Deep Learning – Fall 2024, SDU (main lecturer)

Academic service

  • Program Chair: ICNLSP 2025
  • Program Committee: ACL, EMNLP, ICLR, ICML, AAAI, ECAI, …
  • Journal Reviewer: Nature Human Behaviour, Nature Communications, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Knowledge and Data Engineering, Neural Networks, Pattern Recognition, Journal of Artificial Intelligence Research (JAIR), …

Contact: lukas 'at' lpag.de