ML engineer, researcher, independent thinker.
I'm a machine learning engineer and research scientist working at the boundary between structured symbolic reasoning and learned neural representations. Currently at The Network Group, I build knowledge extraction pipelines for high-stakes domains — legal, financial, and operational environments where reliability and explainability are requirements, not nice-to-haves.
My work at The Network Group has focused on deploying ML systems in contexts where the cost of error is high: a false positive in a safety-monitoring system means wasted resources and eroded trust; a false negative can mean missed obligations or overlooked risk. This has shaped my research interests toward hybrid approaches — systems that combine the pattern-recognition capabilities of modern neural networks with the verifiability and compositionality of classical symbolic methods.
My academic background is in linguistics and computer science, and I am currently preparing applications for PhD programmes in neuro-symbolic AI and knowledge representation. I am particularly interested in programmes with strong connections to both the formal and applied sides of the field — where the theoretical questions (what is a representation? what does grounding require?) are taken seriously alongside the engineering challenges.
Experience
2023 – present
ML Engineer
The Network Group · London
Knowledge extraction pipelines for high-stakes legal, financial, and operational domains.
BSc Linguistics & Computer Science
Intellectual interests
Alongside the technical work, I read and write about continental philosophy — particularly phenomenology and critical theory. I find the questions these traditions raise about mind, meaning, and recognition genuinely relevant to the problems I work on technically, even when the connection is oblique. Fanon on recognition, Heidegger on being-in-the-world, Wittgenstein on rule-following: these are not metaphors for ML problems, but they illuminate what the problems are about in a way that purely technical literature often doesn't.
On the more speculative end of ML theory, I'm interested in the dynamics of training — particularly in what Hurst exponents and long-range dependence in training loss curves might tell us about the structure of learning. This is very much at the "interesting question" stage rather than the "publishable result" stage, but I keep returning to it.
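For the curious: the kind of estimate I poke at is classical rescaled-range (R/S) analysis, where the slope of log(R/S) against log(window size) approximates the Hurst exponent — H ≈ 0.5 for memoryless noise, H > 0.5 for persistent, long-range-dependent series. A minimal sketch (my own back-of-the-envelope version, not a reference implementation; `hurst_rs` is just a name I'm using here):

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent of a 1-D series via rescaled-range (R/S) analysis."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    # Window sizes: powers of two from min_chunk up to half the series length
    sizes = [s for s in (2 ** k for k in range(3, 20)) if min_chunk <= s <= n // 2]
    log_sizes, log_rs = [], []
    for size in sizes:
        rs_vals = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation from the mean
            r = dev.max() - dev.min()               # range of the cumulative deviations
            s = chunk.std()                         # standard deviation of the window
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_sizes.append(np.log(size))
            log_rs.append(np.log(np.mean(rs_vals)))
    # Hurst exponent ≈ slope of log(R/S) against log(window size)
    return np.polyfit(log_sizes, log_rs, 1)[0]
```

Run on white noise this lands near 0.5 (with the usual small-sample upward bias); run on a trending series like a random walk it climbs toward 1. Whether loss curves sit meaningfully above 0.5, and what that would say about optimisation, is exactly the open question.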
Now →
Building the FOLD-RM knowledge extraction experiments. Drafting PhD applications. Reading Cora Diamond on moral philosophy and considering what it implies about the epistemology of large models. Running.
Currently
ML Engineer, The Network Group
Based in
London, UK
Education
BSc Linguistics & Computer Science
PhD applications in progress
Research interests
Neuro-symbolic AI · Knowledge extraction · ILP · Temporal reasoning