Ramesh Arvind Naagarajan

Blog

Notes from the bench. Mostly LLMs, interpretability, and control.

  • May 5, 2026

    Our ICML 2026 paper
  • April 15, 2026

    Reading constraints inside a controller's language model
  • December 19, 2025

    Evaluating explanations: why LIME and SHAP are not enough
  • November 28, 2025

    Symbolic constraints, optimisation, and what LLMs miss
  • November 7, 2025

    From causal discovery to causal reasoning
  • October 17, 2025

    Domain knowledge graphs as scaffolds for LLM reasoning
  • September 26, 2025

    Mechanistic interpretability for non-NLP people, a primer
  • September 5, 2025

    Why LLMs need to explain themselves: a control-systems perspective
  • August 15, 2025

    How we made greenhouse controllers explain themselves

    A walk through the two papers our group published in 2025.

© 2025 Ramesh Arvind Naagarajan · Chemnitz, Germany
