On the accumulation of architectural choices in ML systems — and how the sum of individually reasonable decisions can produce unreasonable outcomes.
On the gap between statistical competence and genuine comprehension in language models — and whether that gap can be closed, or only narrowed.
Reading notes on the ILP literature: FOIL, PROGOL, Metagol, FOLD-RM, and the persistent challenge of scalability. With observations on why ILP keeps returning to the foreground despite the neural wave.
A technical case for finite-state transducers as the connective tissue between unstructured text and structured neural reasoning — with implementation notes on composition, weight propagation, and runtime performance.
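To make the composition idea concrete: the core of weighted FST composition is matching the output tape of one transducer against the input tape of the next, pairing states and combining weights. The sketch below is a minimal, illustrative implementation in the tropical semiring (path weights add); the representation and names are my own, not from any particular library, and it deliberately omits epsilon handling and other refinements a production implementation needs.

```python
# A minimal sketch of weighted FST composition in the tropical semiring
# (weights add along a path). Representation: a transducer is a dict
# mapping state -> list of (in_sym, out_sym, weight, next_state).
from collections import defaultdict

def compose(t1, start1, t2, start2):
    """Compose two weighted FSTs: the output tape of t1 feeds the
    input tape of t2. States of the result are pairs (q1, q2).
    NOTE: illustrative only -- no epsilon transitions, no pruning."""
    result = defaultdict(list)
    stack = [(start1, start2)]
    seen = {(start1, start2)}
    while stack:
        q1, q2 = stack.pop()
        for (i1, o1, w1, n1) in t1.get(q1, []):
            for (i2, o2, w2, n2) in t2.get(q2, []):
                if o1 == i2:  # middle symbols must match
                    nxt = (n1, n2)
                    # tropical semiring: path weights add
                    result[(q1, q2)].append((i1, o2, w1 + w2, nxt))
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
    return dict(result)

# Toy example: t1 maps "a" -> "b" at cost 1.0; t2 maps "b" -> "c" at cost 2.0.
t1 = {0: [("a", "b", 1.0, 1)]}
t2 = {0: [("b", "c", 2.0, 1)]}
composed = compose(t1, 0, t2, 0)
# The composed machine maps "a" -> "c" at cost 3.0.
```

The pair construction is also where the runtime concerns mentioned above bite: the composed machine can have up to |Q1| x |Q2| states, which is why lazy (on-demand) composition is the standard trick in practice.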
Reading Fanon's phenomenology of recognition alongside contemporary debates about machine understanding. What does it mean to be recognised as a knowing subject, and can machines be?
Heidegger's critique of Cartesian cognitivism and its unexpected relevance to debates about embodiment, situatedness, and grounding in contemporary machine learning.