Blog
Notes from the bench. Mostly LLMs, interpretability, and control.
- Our ICML 2026 paper
- Reading constraints inside a controller's language model
- Evaluating explanations: why LIME and SHAP are not enough
- Symbolic constraints, optimisation, and what LLMs miss
- From causal discovery to causal reasoning
- Domain knowledge graphs as scaffolds for LLM reasoning
- Mechanistic interpretability for non-NLP people: a primer
- Why LLMs need to explain themselves: a control-systems perspective
- How we made greenhouse controllers explain themselves: a walk through the two papers our group published in 2025