Here is an interview I did with Anthony Morgan, editor of *The Philosopher*, in 2020, titled ‘Artificial Bodies and the Promise of Abstraction’ for its special issue on ‘Bodies’. The interview is principally concerned with pulling together an overview of what might be called ‘the embodiment paradigm’ in philosophy, which is composed of a number of distinct yet overlapping strands responding to certain explanatory, normative, and metaphysical problems. I am quite critical of this paradigm, but it is important to map it in order to articulate my critique: to acknowledge its positive contributions and to distinguish them from its occasionally regressive impulses.

The end of the interview says something more about my own computational philosophy of mind, but there was a more technical section on the ideas I’ve been drawing from computer science that had to be cut for reasons of space and audience. I’ve included that section below. It should be read as if it were inserted just before the final paragraph of the interview.

**To close, can you offer any speculations about where you see this leading?**

Earlier on, I noted that what distinguishes Descartes from Plato is his conviction that the sensible world can be accurately represented by mathematical models. But really, this distinguishes him from the whole tradition that preceded him, because it establishes a distinction between *representation* and *resemblance*. For the scholastics, the world impresses itself upon us, as a seal upon wax. If we understand something, it must be because the impressions left upon our minds resemble it. And so, for two different people to understand the same thing, their impressions must resemble one another. This is precisely what abstraction promises to overcome: it enables us to think the same thoughts about the world, even when the ways we experience, engage, and enjoy it bear no obvious resemblance to one another. The thing to grasp about computer science is that it has spent decades developing formal frameworks for *guaranteeing* such promises. How do we ensure that a piece of code is *interpreted* in the same way by different machines, despite underlying variations in the implementation of software libraries, operating systems, and computational architecture? How do we know a calculation will produce the same *result*? Or that a simulation will display the same *behaviour*? Pace Searle, there are methods here that have little to do with whether or not symbols *seem* meaningful.
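To make the point concrete, here is a minimal sketch in Python (all names my own, purely illustrative) of why such guarantees have to be made explicit: two implementations of ‘the same’ addition, one over unbounded integers and one over 32-bit machine words, agree only within a stated contract, and outside it the underlying representation leaks through.

```python
def add_unbounded(a: int, b: int) -> int:
    """Addition over Python's arbitrary-precision integers."""
    return a + b

def add_32bit(a: int, b: int) -> int:
    """Addition as a machine with 32-bit two's-complement words performs it."""
    result = (a + b) & 0xFFFFFFFF  # keep only the low 32 bits
    return result - 0x100000000 if result >= 0x80000000 else result

# Within the agreed range, the two machines produce identical results,
# despite storing numbers in ways that bear no resemblance to one another.
assert add_unbounded(1000, 2000) == add_32bit(1000, 2000) == 3000

# Outside it, the guarantee lapses: the 32-bit representation wraps around.
assert add_unbounded(2**31 - 1, 1) == 2**31
assert add_32bit(2**31 - 1, 1) == -(2**31)  # overflow
```

The formal frameworks mentioned above exist precisely to state such contracts and to prove that every conforming implementation honours them.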

There are limits to what I can discuss here, but I think these formal tools might let us renew the Platonic strategy for understanding thought as such: we can begin by looking at *mathematical cognition* from a computational perspective, and then gain purchase upon *empirical cognition* by articulating the opposition between the two (a distinctly Kantian dualism).

Computer science has already made a significant contribution to questions concerning the nature of mathematical *intentionality*, such as what it means for different symbols (e.g., variables) to stand for the same mathematical object (e.g., a pair of primes). Type systems solve a range of practical programming problems, such as ensuring a program that operates on whole numbers cannot accidentally be handed a fractional number. Each data type determines a range of values (e.g., positive integers: 1, 2, 3…) and a framework for proving that two values are identical (e.g., sum(1, 2) = 3). In practice, most problems are solved by *concrete* data types, which are constrained by the way the relevant data structures are implemented (e.g., integers restricted to 32-bit representations); but we can also work with *abstract* data types, which describe mathematical structures without reference to implementation (e.g., natural numbers defined by the constructors 0 and succ()). This is essentially the difference between choosing a *numeral syntax* (e.g., Arabic or Roman) and grasping its *mathematical meaning* (e.g., the Peano axioms). An ambitious program known as “Homotopy Type Theory” promises to extend such abstraction even further, providing something like a unified theory of mathematical types. Details aside, these ideas might offer us insight into deeper questions about the nature of *identity* and *aboutness* as such.
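The abstract data type gestured at above can be sketched in a few lines of Python (the names are mine, not drawn from any particular library): natural numbers given by the constructors zero and succ, with addition and identity defined by structure alone, independently of how any machine happens to store numbers.

```python
from dataclasses import dataclass

class Nat:
    """Abstract natural number: either Zero or Succ(predecessor)."""

@dataclass(frozen=True)
class Zero(Nat):
    pass

@dataclass(frozen=True)
class Succ(Nat):
    pred: Nat

def add(a: Nat, b: Nat) -> Nat:
    """Addition defined purely by the constructors: succ(a) + b = succ(a + b)."""
    return b if isinstance(a, Zero) else Succ(add(a.pred, b))

def to_int(n: Nat) -> int:
    """Interpret the abstract structure in a concrete representation."""
    return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)

one, two = Succ(Zero()), Succ(Succ(Zero()))

# Identity is structural: sum(1, 2) and 3 denote the same object,
# however that object might be written down in any numeral syntax.
assert add(one, two) == Succ(Succ(Succ(Zero())))
assert to_int(add(one, two)) == 3
```

Nothing in the definition mentions bits or bounds; equality holds between values whenever their constructions coincide, which is the sense of ‘identity’ at issue.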

But what is the key difference that lets us pass from mathematical to empirical intentionality? Could there be something like a unified theory of “empirical types”, and if so, what would it look like? Husserlian phenomenology posits that we always see objects as belonging to some kind (i.e., we see the tree *as* a tree), so there is at least some precedent here. To my mind, the key difference is interaction: we encounter empirical objects not as neat units of *definite structure* (data), but as messy bundles of *open-ended behaviour* (co-data). But to make use of this we must understand interaction in a sufficiently abstract manner. Just as I argued above that we mustn’t identify immersion with incarnation, we must be careful not to identify interaction with immersion. Environmental interaction has the same underlying logic, regardless of whether it is immediately lived (e.g., climbing a tree) or carefully mediated by an assortment of experimental apparatuses and scientific theories (e.g., splicing its genes). Adjusting one’s actions when sensorimotor expectations are *violated* is not different in kind from revising one’s theories when experimental hypotheses are *refuted*. At the end of the day, both are forms of *cybernetic feedback*. The question thus becomes: how could “empirical types” bundle input/output streams into objects seen as sources of such feedback?
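The data/co-data contrast can itself be given a small illustrative sketch in Python (again, the names are mine). A list is data: a finite, definite structure we can inspect all at once. A stream is co-data: it is defined not by its contents but by how it responds when probed, and we only ever learn about it through interaction, one observation at a time.

```python
from typing import Callable, Tuple

# A stream is given by its behaviour: observing it yields a value
# together with a successor stream (the rest of its open-ended behaviour).
Stream = Callable[[], Tuple[int, "Stream"]]

def count_from(n: int) -> Stream:
    """The infinite stream n, n+1, n+2, ... -- never held in memory whole."""
    return lambda: (n, count_from(n + 1))

def observe(s: Stream, k: int) -> list:
    """Interact with a stream k times, collecting the responses as finite data."""
    out = []
    for _ in range(k):
        value, s = s()
        out.append(value)
    return out

# We never possess the stream as a completed structure; we only sample
# its behaviour, and each probe is a small cybernetic exchange.
assert observe(count_from(0), 5) == [0, 1, 2, 3, 4]
```

An ‘empirical type’, on this picture, would classify such sources of behaviour by the patterns of response they can be relied upon to exhibit, rather than by any completed structure they contain.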