March 2023

Being-in-the-World and the Situated Machine

Heidegger's critique of Cartesian cognitivism and its unexpected relevance to debates about embodiment, situatedness, and grounding in contemporary machine learning.


Martin Heidegger’s critique of the Cartesian picture of mind has been absorbed into cognitive science and AI research in various ways over the past four decades — through embodied cognition, through situated robotics, through the extended mind thesis. It has been less thoroughly absorbed into the theory of language models, which remain, structurally, very Cartesian: a reasoning engine operating on symbolic representations, detached from the world those representations are supposed to be about.

The critique Heidegger develops in Being and Time targets the primacy of the subject-object relation in epistemology. The Cartesian picture begins with a knowing subject confronting an external world and asks how the subject can gain knowledge of that world. Heidegger’s phenomenological analysis argues that this picture misrepresents the structure of human existence. We are not primarily subjects confronting objects; we are beings-in-the-world, already embedded in practical contexts that give things their meaning before any explicit theorising begins.

The Ready-to-Hand and the Present-at-Hand

The distinction between ready-to-hand (Zuhandenheit) and present-at-hand (Vorhandenheit) is the key move. Tools, in the normal course of use, do not show up as objects to be scrutinised. The hammer is ready-to-hand: it recedes into the background of the activity, defined by its place in the web of practical references (hammers are for nails, nails are for joining, joining is for building…). The hammer becomes present-at-hand — an object, something with properties — when it breaks, or when someone unfamiliar with hammers encounters one for the first time.

The epistemological implication is that understanding something — really understanding it, not just possessing true beliefs about it — requires having it be ready-to-hand in some practical activity. Detached theoretical knowledge is derivative of this more fundamental practical engagement.

The Grounding Problem Revisited

This matters for machine learning in the following way. The grounding problem for language models is usually stated as: how do the tokens in the model’s vocabulary connect to the things in the world they refer to? The standard answers take one of two forms: through the statistical regularities in text that encode correlations with world-states, or through multimodal training that connects linguistic representations to perceptual ones.
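The first of these answers can be made concrete with a toy sketch (mine, not the essay’s; the corpus and word choices are invented for illustration). Distributional approaches treat the statistical association between words as a stand-in for the worldly relation between the things they name:

```python
# Toy illustration of distributional "grounding": co-occurrence counts
# as a proxy for worldly relations. Corpus is invented for the example.
from collections import Counter
from itertools import combinations

corpus = [
    "the hammer drives the nail",
    "the nail joins the boards",
    "the hammer strikes the nail hard",
]

# Count how often each word pair co-occurs within a sentence.
cooc = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        cooc[(a, b)] += 1

# "hammer" and "nail" co-occur in two of three sentences; on the
# distributional view, this association encodes their relation.
print(cooc[("hammer", "nail")])  # → 2
```

The Heideggerian worry, on the essay’s reading, is precisely that nothing in this table of counts determines what would count as a correct application of "hammer" in practice.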

Heidegger’s analysis suggests both answers are incomplete in the same way. Neither statistical co-occurrence nor perceptual correlation is the same as practical engagement. What it means for a representation to be grounded is not that it correlates with the right inputs, but that it is embedded in a structure of practical use that determines what counts as correct application.

I am not sure what a Heideggerian grounding solution for language models would look like. Perhaps something like reinforcement learning from human feedback is gesturing toward the right territory — models learning through action and consequence rather than passive observation — but this remains speculative. What the analysis does is make the grounding problem look harder than the standard formulation suggests, and point toward the practical and relational dimensions of meaning that purely representational accounts tend to miss.
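The contrast between passive observation and learning through action and consequence can be sketched with a deliberately simple toy (my illustration, not a claim about how RLHF works; the action names and reward probabilities are invented). A bandit-style learner forms its estimates only by acting and registering outcomes, never by being told descriptions:

```python
# Toy contrast: value estimates formed through trial and consequence,
# not passive description. All names and probabilities are invented.
import random

random.seed(0)

# Two actions with hidden success probabilities the learner never sees.
true_reward = {"use_hammer": 0.9, "use_wrench": 0.2}

counts = {a: 0 for a in true_reward}
values = {a: 0.0 for a in true_reward}

for _ in range(500):
    # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(true_reward))
    else:
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # Incremental mean update of the action-value estimate.
    values[action] += (reward - values[action]) / counts[action]

# The learner's estimates are a residue of engaged activity: what it
# "knows" about each tool is what acting with it has returned.
print(values["use_hammer"], values["use_wrench"])
```

Whether this kind of action-consequence loop amounts to practical engagement in Heidegger’s sense, rather than just another correlational channel, is exactly the open question the essay leaves speculative.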