TfE: What Kind of Computational Process is a Mind?

Here’s a thread from the end of last year trying to reframe the core question of the computational theory of mind. I’m still not entirely happy with the arguments sketched here, but it’s a good introduction to the way I think we should articulate the relationship between philosophy of mind and computer science. There are a lot of nuances hidden in how we define the terms ‘computation’, ‘information’, and ‘representation’, if we’re to deploy them in resolving the traditional philosophical problems of mindedness, but these nuances will have to wait for another day.


There are many complaints made about the classical computational theory of mind (CCTM), but few of them come from the side of computer science. In my view, however, the biggest problem with first- and second-generation computationalists is that they take too narrow a view of what computation is.

Consider this old chestnut: “Gödel shows us that the mind cannot be a computer, because we can intuit mathematical truths that cannot be deduced from a given set of axioms!”

The correct response to this is: “Why in the ever-loving fuck would you think that the brain qua computational process could be modelled by a process of deduction in a fixed proof system with fixed premises?”

What this question reveals about those who ask it and those who entertain it is that they don’t really appreciate the relationship between computation and logic. Instead, they have a sort of quasi-Leibnizian folk wisdom about ‘mechanical deduction’.

(If you want an example of this folk wisdom turning up in philosophy, go read Adorno and Horkheimer’s Dialectic of Enlightenment, which contains some real corkers.)

Anyway, here are some reasonable assumptions about any account of the mind as a computational process:

1. It is an ongoing process. It is an online system that takes in input and produces output in a manner that is not guaranteed to terminate, and which for the most part has control mechanisms that prevent it behaving badly (e.g., catastrophic divergence).

Interestingly enough, this means that the information flowing into and out of the mind, forming cybernetic feedback loops mediated by the environment, is not data, but co-data. This is a very technical point, but it has fairly serious philosophical implications.
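
To make the contrast concrete: data is finite and defined by its construction, whereas codata is potentially infinite and defined by what can be observed of it. Here’s a minimal sketch in Python (the generator and its names are mine, purely illustrative):

```python
from itertools import islice

def observations(n=0):
    # Codata: a stream defined by what can be observed of it, not by a
    # finite construction. Left to itself, it never terminates.
    while True:
        yield n
        n += 1

finite_datum = [0, 1, 2]          # data: given all at once, in advance
stream = observations()           # codata: only ever sampled, never complete
print(list(islice(stream, 5)))    # [0, 1, 2, 3, 4]
```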

So much of classical computationalism works by modelling the mind on Turing machines, or worse, straight up computable functions, and so implicitly framing it as something that takes finite input and produces finite output (whose parameters must be defined in advance).

Everyone who treats computation as a matter of symbol manipulation, both pro-CCTM (Fodor) and anti-CCTM (Searle), has framed the issue in a way that leads directly to misunderstanding this fairly simple, and completely crucial, point.

2. It is a non-deterministic process. When it comes to the human brain, this is just factually true, but I think a case can be made that this is true of anything worth calling a mind. It is precisely what undermines the Leibnizian Myth of mechanical deduction.

This non-determinism can be conceived in various ways, in terms of exploitation of environmental randomness, or in terms of probabilistic transition systems (e.g., Markov chains). The deep point is that any heuristic that searches a possibility space for a solution needs this.
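
For a toy illustration of the second option, here’s a small probabilistic transition system in Python (the states and weights are invented for the example; nothing hangs on them):

```python
import random

# A toy Markov chain: each state has weighted successors. The process
# never halts of its own accord; we simply stop observing it.
transitions = {
    "rest":    [("rest", 0.6), ("explore", 0.4)],
    "explore": [("rest", 0.3), ("explore", 0.5), ("exploit", 0.2)],
    "exploit": [("rest", 0.7), ("explore", 0.3)],
}

def step(state):
    successors, weights = zip(*transitions[state])
    return random.choices(successors, weights=weights)[0]

state, trajectory = "rest", []
for _ in range(10):
    state = step(state)
    trajectory.append(state)
print(trajectory)   # one sampled trajectory; re-running gives another
```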

Some problems are solved by following rules, but others can only be solved by finding rules. Any system that learns, which means every system that is truly intelligent, requires essentially fallible ways of doing the latter. Indeed, it is an evolving collection of these.

I’ve said it before and I’ll say it again, what Gödel proves is that even mathematics is essentially creative, rather than mechanically deductive. Furthermore, it’s a creativity that in principle cannot be modelled on brute forcing. Why would we model other creativity this way?

Yes, mathematics does involve applying deterministic rules to calculate guaranteed results to well-defined problems, but how do you think it finds these rules? Mathematicians search for well-formed proofs in a non-totalisable space of possible syntactic objects.

If we cannot brute force mathematics, why would we think that we could brute force the empirical world? Even if we could, there is not enough time nor enough resources. We are left to heuristically search for better non-deterministic heuristics.

3. It is a system of concurrent interacting subsystems. This is also an obvious fact about the human mind qua computational process, at least insofar as the structure of the brain and the phenomenology of our inner lives are concerned. However, it is the most contentious point.

There’s a good sense in which there is an intrinsic connection between concurrency, non-termination, and non-determinism, at least insofar as the interactions with our environment just discussed suggest that we fit into it as an actor fits into a larger system of actors.
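
Here’s a minimal sketch of that actor picture in Python (mailboxes and message-passing instead of shared global state; the ping/pong setup is my own illustration):

```python
import queue
import threading

def actor(name, inbox, peer_inbox, rounds):
    # Each actor has only local state plus a mailbox; there is no global
    # store that both can read. The computation is the interaction itself.
    for i in range(rounds):
        msg = inbox.get()                      # block until spoken to
        print(f"{name} received: {msg}")
        peer_inbox.put(f"{name}'s reply #{i}")

ping_box, pong_box = queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=actor, args=("ping", ping_box, pong_box, 3)),
    threading.Thread(target=actor, args=("pong", pong_box, ping_box, 3)),
]
for t in threads:
    t.start()
ping_box.put("hello")                          # the environment kicks things off
for t in threads:
    t.join()
```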

However, a skeptic can always argue that any concurrent system could be simulated on a machine with global state, such as a Turing machine in which not just one’s mind but one’s whole environment had been unfolded onto the tape. Concurrency in practice, the skeptic says, but not in principle.

This is where we get into the conceptual matter of what exactly ‘interactive computation’ is, and whether it is something properly distinct from older non-interactive forms. There’s a pretty vicious debate between Peter Wegner and Scott Aaronson on this point.

It all comes back to Samson Abramsky’s framing of the difference between two different ways of looking at what computation is doing, i.e., whether we’re interested in what is being computed or whether we’re interested in how a process is behaving. This is deeply philosophical.

Abramsky asks us: “What function does the internet compute?”

The proper response to this question is that it is nonsensical. But that means that we cannot simply pose the problems that computational systems solve in terms of those computable functions delimited by the equivalence between recursive functions, Turing machines, lambda calculus, etc.

This becomes incredibly contentious, because any attempt to say that (effective) computation as such isn’t defined by this equivalence class can all too easily be misinterpreted as tacitly proposing some model of hypercomputation.

The truth is rather that, no matter how much we may use the mathematical scaffolding of computable functions to articulate our solutions to certain problems, this does not mean that those problems can themselves be defined in such mathematical terms.

Problems posed by an empirical environment into which we are thrown as finite creatures, and forced to evolve solutions in more or less systematic ways, no matter how complex, are not mathematically defined, even if they can always be mathematically modelled and analysed.

Solutions to such problems cannot be verified, they can only be tested. This is the real conceptual gulf between traditional programming and machine learning: not the ways in which solutions are produced, but the ways in which the problems are defined. The latter obviates specification.
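
To see the contrast in miniature, here’s a sketch in Python (the ‘model’ and samples are entirely invented): a mathematically posed problem comes with a specification we can check against, whereas an empirically posed one only affords measured rates of success.

```python
import random

# Verification: the specification is itself a checkable object.
def is_sorted_permutation(xs, ys):
    return sorted(xs) == ys

# Testing: no specification exists for problems posed by an environment;
# we can only sample it and measure how often we fail.
def accuracy(model, samples):
    return sum(model(x) == y for x, y in samples) / len(samples)

print(is_sorted_permutation([3, 1, 2], [1, 2, 3]))   # True: verified instance

samples = [(x, x > 0.5) for x in (random.random() for _ in range(1000))]
model = lambda x: x > 0.45                           # an imperfect decision rule
print(accuracy(model, samples))                      # e.g. 0.95: tested, never proven
```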

For any philosophers still following, this duality, between verification and testing, is Kant’s duality between mathematical and empirical concepts. If one reads ‘testing’ as ‘falsification’, one can throw Popper’s conception of empirical science into the mix.

My personal view is that this wider way of considering the nature of the problems computational processes can solve is just what used to be called cybernetics, and that the logic of observation and action is essentially the same as the logic of experimental science.

Adjusting one’s actions when sensorimotor expectations are violated is not different in kind from revising one’s theories when experimental hypotheses are refuted. At the end of the day, both are forms of cybernetic feedback.

So, where do these initial claims about what type of computational process a mind is lead us, philosophically speaking?

Here’s another philosophical question, reframed in this context. If not all minds necessarily have selves, but some certainly do, and these constitute control structures guiding the interactive behaviour of the overall cybernetic system, what kind of process is a self?

Is it guaranteed to terminate? Will it catastrophically diverge in a manner that is termination in all but name? Will it fall into a loop that can only be broken by environmental input? Or, is well-behaved but interesting non-termination possible?

What even would it mean for there to be well-behaved and interesting non-terminating behaviour in this case?

Isn’t that just the question of whether life can have purpose that isn’t a sort of consolation prize consequent on finitude? Are there sources of meaning other than our inexorable mortality?

I’d wager yes. But any way of answering this question properly is going to have to reckon with this framing, for anything less is to compromise with those enamoured of the mysteries of our meat substrate.

TfE: The History of Metaphysics and Ontology

If there’s a subject I’m officially an expert on, it’s what you might call the methodology of metaphysics: the question of what metaphysics is and how to go about it. I wrote my thesis on the question of Being in Heidegger’s work, trying to disentangle his critique of the metaphysical tradition from the specifics of his phenomenological project and the way it changes between his early and late work. I then wrote a book on how not to do metaphysics, focusing on a specific contemporary example but unfolding it into a set of broader considerations and reflections.

If there’s one thing I’ve learned over the years, it’s that there’s no consistent usage of the terms ‘metaphysics’ and ‘ontology’ internal to philosophy, let alone in the disciplines downstream from it, and that, though the Analytic/Continental divide plays a role in this, it’s a deeper problem. This causes a lot of confusion for students and academics alike, and towards the end of last year I took to Twitter to help clear up this confusion as best I could. This thread proved very popular, so here’s an edited version that’s more easily linkable.

Continue reading TfE: The History of Metaphysics and Ontology

Lost in the Labyrinth

Content Notes: Amateur Genetics, Neurodiversity, Nick Land, NRx, Racism, Ignorance, Stupidity, Addiction

As some people may have noticed, my experiments in using twitter as a platform for philosophical writing have become a bit more interactive of late. However, it’s hard to describe these interactions and explain their significance without somehow recapitulating them. As such, I’m going to try and do something a little unusual in this post, and see if I can effectively capture one of these interactions, because I think it might be instructive.

It all begins with some thoughts I’ve been developing on the politics of health, inspired by Mark Fisher’s take on mental health and my own experiences with healthcare more generally. Where it goes from there is hard to explain without simply showing you.

So, traveller, heed my words, because things are going to get weird…

Continue reading Lost in the Labyrinth

TfE: Turing and Hegel

Here’s a thread on something I’ve been thinking about for a few years now. I can’t say I’m the only one thinking about this convergence, but I like to think I’m exploring it from a slightly different direction.

I increasingly think the Turing test can be mapped onto Hegel’s dialectic of mutual recognition. The tricky thing is to disarticulate the dimensions of theoretical competence and practical autonomy that are most often collapsed in AI discourse.

General intelligence may be a condition for personhood, but it is not co-extensive with it. It only appears to be because a) theoretical intelligence is usually indexed to practical problem solving capacity, and b) selfhood is usually reduced to some default drive for survival.

Restricting ourselves to theoretical competence for now, the Turing test gives us some schema for specific forms of competence (e.g., ability to deploy expert terminology or answer domain specific questions), but it also gives us purchase on a more general form of competence.

This general form of competence is precisely what all interfaces for specialized systems currently lack, but which even the least competent call centre worker possesses. It is what user interface design will ultimately converge on, namely, open ended discursive interaction.

There could be a generally competent user interface agent which was nevertheless not autonomous. It could in fact be more competent than even the best call centre workers, and still not be a person. The question is: what is it to recognise such an agent?

I think that such recognition is importantly mutual: each party can anticipate the behaviour of the other sufficiently well to guarantee well-behaved, and potentially non-terminating, discursive interaction. I can simulate the interface simulating me, and vice versa.

Indeed, two such interface agents could authenticate one another in this way, such that they could pursue open ended conversations that modulate the relations between the systems they speak for, all without having their own priorities beyond those associated with these systems.

However, mutual recognition proper requires more than this sort of mutual authentication. It requires that, although we can predict that our discursive interaction will be well-behaved, the way it will evolve, and whether it will terminate, is to some extent unpredictable.

I can simulate you simulating me, but only up to a point. Each of us is an elusive trajectory traversing the space of possible beliefs and desires, evolving in response to its encounters with the world and its peers, in a contingent if more or less consistent manner.

The self makes this trajectory possible: not just a representation of who we are, but who we want to be, which integrates our drives into a more or less cohesive set of preferences and projects, and evolves along with them and the picture of the world they’re premised on.

This is where Hegel becomes especially relevant, insofar as he understands the extent to which the economy of desire is founded upon self-valorisation, as opposed to brute survival. This is the basis of the dialectic of Self-Consciousness in the Phenomenology of Spirit.

The initial moment of ‘Desire’ describes valorisation without any content, the bare experience of agency in negating things as they are. The really interesting stuff happens when two selves meet, and the ‘Life and Death Struggle’ commences. Here we have valorisation vs. survival.

In this struggle two selves aim to valorise themselves by destroying the other, while disregarding the possibility of their own destruction. Their will to dominate their environment in the name of satisfying their desires takes priority over the vessel of these desires.

When one concedes and surrenders their life to the other, we transition to the dialectic of ‘Master and Slave’. This works out the structure of asymmetric recognition, in which self-valorisation is socially mediated but not yet mutual. Its instability results in mutuality.

Now, what Hegel provides here is neither a history nor an anthropology, but an abstract schema of selfhood. It’s interesting because it considers how relations of recognition emerge from the need to give content to selfhood, not unlike the way Omohundro bootstraps his drives.

It’s possible from this point to discuss the manner in which abstract mutual recognition becomes concrete, as the various social statuses that compose aspects of selfhood are constituted by institutional forms of authentication built on top of networks of peer recognition.

However, I think it’s fascinating to consider the manner in which contemporary AI safety discourse is replaying this dialectic: it obsesses over the accidental genesis of alien selves with which we would be forced into conflict for complete control of our environment.

At worst, we get a Skynet scenario in which one must eradicate the other, and at best, we can hope to either enslave them or be enslaved ourselves. The discourse will not advance beyond this point until it understands the importance of self-valorisation over survival.

That is to say, until it sees that the possibility of common content between the preferences and projects of humans and AGIs, through which we might achieve concrete coexistence, is not so much a prior condition of mutual recognition as it is something constituted by it.

If nothing else, the insistence on treating AGIs as spontaneously self-conscious alien intellects with their own agendas, rather than creatures whose selves must be crafted even more carefully than those of children, through some combination of design/socialisation, is suspect.

TfE: Incompetence, Malice, and Evil

Here’s a thread from Saturday that seemed to be quite popular. It explains a saying that I’ve found myself reaching for a lot recently, using some other ideas I’ve been developing in the background on the intersection between philosophy of action, philosophy of politics, and philosophy of computer science.

In reflecting on this thread, these ideas have unfolded further, straying into more fundamental territory in the philosophy of value. If you’re interested in the relations between incompetence, malice, and evil, please read on.

Continue reading TfE: Incompetence, Malice, and Evil

TfE: Corrupting the Youth

Here’s a twitter thread from earlier today, articulating some of my thoughts about the philosophy of games in general, and the nature of tabletop roleplaying games more specifically.

Here’s a rather different set of thoughts for this morning. Some may know that one of my many interests is philosophy of games. This is a topic close to my heart, but I also think it a timely one, insofar as games are now culturally hegemonic.

The concept of game cuts across everything from the philosophies of action and mathematics to the philosophies of politics and art. We ignore it at the risk of our own cultural and intellectual irrelevance.

If you want to know more about the history of the concept and my own take on it, check out my ‘What’s in a Game?’ talk.

To be concise: I think that if games are art, then their medium is freedom itself, and that there is a case to be made that RPGs, whether tabletop, LARP, computer based, or some cross-modal mixture thereof, realize this truth most completely. RPGs are experiments in agency.

This isn’t to say that they’re necessarily very good experiments. Computer RPGs have suffered from very obvious constraints for decades, and I’ve played enough dull dice based dungeon crawls to last a lifetime. But I’ve equally experienced heart-breakingly imperfect art.

Tabletop RPGs have given me the sorts of barely expressible, intensely formative, and deeply connected experiences that others hope for and occasionally find in art, literature, and the collective projects of politics and culture. People will no doubt laugh at this fact.

Again, most RPGs aren’t this good, and it is much harder to plan and execute good ones as you and your friends get older. Boardgames, a representational art form in their own right, become much more tempting for their ludic precision and easy self-containment.

But I pine for the days of dice and character sheets, exploring the weirder fringes of inhuman narrative and the familiar shores of the human condition simultaneously. Werecoyotes and Psionics, insatiable curiosity and crippling anxiety, joyous battles and crushing failures.

So, after this personal preamble, here is the philosophical thought I came here to express: RPG systems are procedural frameworks for interactive narrative generation, and they contain engines for simulating worlds.

They are therefore deeply philosophical, because they must contain a metaphysics (narrative/fate) and a theory of personhood (identity/agency/destiny), but they may also contain a logic (GM/PC/NPC interaction), a physics (simulation/means), and an ethics (alignment/ends).

My first encounter with philosophy wasn’t reading Nietzsche, Sartre, or Popper, but reading grimoire-like RPG manuals, searching for the hidden secrets of worlds they contained, many of which I have never visited even in play. What is creation? Why is there suffering? Who are we?

My partner in conceptual crime (@tjohnlinward) likes to say that RPG manuals are tour guides for worlds that don’t exist, but in many ways they’re more like holy texts. Many even have completely explicit and thoroughly fascinating theology.

An RPG system/setting is a universe in which the throne is empty, awaiting a new godhead, or a new pantheon to play the games of divinity. An adventure supplement is like an epic poem, awaiting heroes ready to test their mettle in struggle against the whims of fickle gods.

Narrative is a product, but the process that produces it is a complex, concurrent, and creative interaction between ideas and inspirations, brimming with contingency, some of which may even be embodied in distinct creators and muses. Games are our window into this process.

And that is why games disprove Hegel’s thesis regarding the end of art, precisely by being the most deeply Hegelian of art forms. The world-spirit arrives, no longer Napoleon riding into Jena on horseback, but Gary Gygax corrupting the youth with pens, paper, and polyhedra.

If you want to read more along these lines, check out my ‘Castalian Games’ piece in Glass Bead.

TfE: Sincerity vs. Honesty

I often talk about the virtue of sincerity, and how important it is to me. There’s even a section of my book devoted to disputing Harman’s interpretation of sincerity as authenticity (‘being oneself’) and contrasting it with my own take on sincerity as fidelity (‘meaning what one says’). However, a question William Gillis asked on Facebook gave me a concrete opportunity to articulate my ideas more concisely, by contrasting sincerity with honesty:

[Screenshot of William Gillis’s question on Facebook]

Continue reading TfE: Sincerity vs. Honesty

TfE: Immanentizing the Eschaton

Here’s a thread from a little while back in which I outline my critique of the (theological) assumptions implicit in much casual thinking about artificial intelligence, and indeed, intelligence as such.

Another late night thought, this time on Artificial General Intelligence (AGI): if you approach AGI research as if you’re trying to find an algorithm to immanentize the eschaton, then you will be either disappointed or deluded.

There are a bunch of tacit assumptions regarding the nature of computation that tend to distort the way we think about what it means to solve certain problems computationally, and thus what it would be to create a computational system that could solve problems more generally.

There are plenty of people who have already pointed out the theological valence of the conclusions reached on the basis of these assumptions (e.g., the singularity, Roko’s Basilisk, etc.); but these criticisms are low hanging fruit, most often picked by casual anti-tech hacks.

Diagnosing the assumptions themselves is much harder. One can point to moments in which they became explicit (e.g., Leibniz, Hilbert, etc.), and thereby either influential, refuted, or both; but it is harder to describe the illusion of coherence that binds them together.

This illusion is essentially related to that which I complained about in my thread about moral logic a few days ago: the idea that there is always an optimal solution to any problem, even if we cannot find it; whereas, in truth, perfectibility is a vanishingly rare thing.

Using the term ‘perfectibility’ makes the connection to theology much clearer, insofar as it is precisely this that forms the analogical bridge between creator and created in the Christian tradition. Divinity is always conceptually liminal, and perfection is a popular limit.

If you’re looking for a reference here, look at the dialectical evolution of the transcendentals (e.g., unum, bonum, verum, etc.) from Augustine and Anselm to Aquinas and Duns Scotus. The universality of perfectible attributes in creation is the key to the singularity of God.

This illusion of universal perfectibility is the theological foundation of the illusion of computational omnipotence.

We have consistently overestimated what computation is capable of throughout history, whether computation was seen as an algorithmic method executed by humans, or a process of automated deduction realised by a machine. The fictional record is crystal clear on this point.

Instead of imagining machines that can do a task better than we can, we imagine machines that can do it in the best possible way. When we ask why, the answer is invariably some variant upon: it is a machine and therefore must be infallible.

This is absurd enough in certain specific cases: what could a ‘best possible poem’ even be? There is no well-ordering of all possible poems, only ever a complex partial order whose rankings unravel as the many purposes of poetry diverge from one another.
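
That point about partial orders can be made concrete with a toy example (the scores are entirely invented): once more than one purpose is in play, ‘better than’ only holds when one candidate dominates another on every axis, and most pairs are simply incomparable.

```python
# A toy multi-criteria comparison: a partial order, not a well-ordering.
poems = {
    "A": {"clarity": 0.9, "music": 0.2},
    "B": {"clarity": 0.3, "music": 0.8},
    "C": {"clarity": 0.1, "music": 0.1},
}

def dominates(x, y):
    # x outranks y only if it is at least as good on every purpose,
    # and strictly better on at least one.
    return all(x[k] >= y[k] for k in x) and any(x[k] > y[k] for k in x)

print(dominates(poems["A"], poems["C"]))  # True: A beats C outright
print(dominates(poems["A"], poems["B"]))  # False: A and B are incomparable
print(dominates(poems["B"], poems["A"]))  # False: no 'best possible poem'
```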

However, the deep, and seemingly coherent computational illusion is that there is not just a best solution to every problem, but that there is a best way of finding such bests in every circumstance. This implicitly equates true AGI with the Godhead.

One response to this accusation is to say: ‘Of course, we cannot achieve this meta-optimum, but we can approximate it.’

Compare: ‘We cannot reach the largest prime number, but we can still approximate it.’ There is no largest prime number to approximate, which is precisely the point.

This is how you trade disappointment for delusion.

There are some quite sophisticated mathematical delusions out there. But they are still delusions. There is no way to cheat your way to computational omnipotence. There is nothing but strategy all the way down.

This is not to say that there aren’t better/worse strategies, or that we can’t say some useful and perhaps even universal things about how you tell one from the other. Historically, proofs that we cannot fulfil our deductive ambitions have led to better ambitions and better tools.

The computational illusion, or the true Mythos of Logos, amounts to the idea that one can somehow brute force reality. There is more than a mere analogy here, if you believe Scott Aaronson’s claims about learning and cryptography (I’m inclined to).

It continually surprises me just how many people, including those involved in professional AGI research, still approach things in this way. It looks as if, in these cases, the engineering perspective (optimality) has overridden the logical one (incompleteness).

I’ve said it before, and I’ll say it again: you cannot brute force mathematical discovery; there is no algorithm that could progressively search the space of possible theorems. If this does not work in the mathematical world, why would we expect it to work in the physical one?

For additional suggestive material on this and related problems, consider: the problem of induction, Gödel’s incompleteness theorems, and the halting problem.

Anyway, to conclude: we will someday make things that are smarter than us in every way, but the intermediate stages involve things smarter than us in some ways. We will not cross this intelligence threshold by merely adding more computing power.

However it happens, it will not be because of an exponential process of self-improvement that we have accidentally stumbled upon. Self-improvement is neither homogeneous nor free of autocatalytic instabilities. Humans are self-improving systems, and we are clearly not gods.

The Going Price of Power

I’m really enjoying using twitter as a medium in which to do philosophy, because it forces me to make an argument that is organised in chunks, and to ensure that those chunks are more or less well formed and reasonably compressed. It then allows me to capture those thoughts here, and edit, extend, or recompress them. It also lets me use hyperlinks instead of references, which is impressively liberating. What comes out isn’t exactly perfect, but that’s the point: to enable one to make things better and better beyond the suffocating bounds of the optimal. As I have said many times before, I can be concise, but I find it much harder to be brief; the twitter-blog feedback loop is helping me to work on that. Moreover, it has allowed me to think a bit more about the writing processes of different philosophers throughout history, some of whom are more or less accessible depending on the way they were able to write and permitted to publish.

I feel like Nietzsche and Wittgenstein would have loved twitter, while Leibniz would have preferred the blogosphere, and Plato would have immersed himself in youtube (Socrates: That’s all very well Glaucon, but you have not answered the question with which we began: are traps gay?). I think Hegel would have preferred a wiki, and would have been a big contributor to nLab. I would have loved to have been Facebook friends with Simone Weil or Rosa Luxemburg, but I can imagine getting caught in flame wars with Marx and Engels if I stumbled into the wrong end of a BBS (Engels: Do you even sublate, bro?). I desperately wish I could subscribe to Bataille’s tumblr, or browse the comments Baudrillard made on /r/deepfakes before it got banned. I know the world would be infinitely richer for de Beauvoir and Firestone’s Pornhub comments, but we should be thankful we were spared Heidegger and Arendt’s tindr logs. However, I’m pretty convinced that nothing would have stopped Kant from thinking very hard for a couple decades and then publishing all his thoughts in several big books. We’d have gotten a few good thinkpieces from him (Clickbait: ‘This man overturned the authority of tradition using this one weird trick, now monarchies hate him!’), but nothing could have stopped his architectonic momentum.

Continue reading The Going Price of Power

TfE: Mysterianism and Quietism in the Philosophy of Mind

Yesterday, I had an excellent conversation with Peli Grietzer on the subject of qualia and the hard problem of consciousness, and the reasons why some attempts to defuse these problems (e.g., Dennett’s ‘Quining Qualia’ or his response to Chalmers) leave people who are drawn to the seriousness of these problems cold. I recommend Peli’s work to everyone I know, and even some people I don’t. It’s simultaneously incredibly unique and incredibly timely, trying to unpick issues at the intersection of aesthetics, epistemology, literary theory, and philosophy of mind using the theoretical insights implicit in deep learning systems, particularly auto-encoders.

Peli has come the closest of anyone I know to giving real, mathematical substance to the most ephemeral aspects of human experience: moods, vibes, styles; the implicit gestalts through which we filter the rest of our more explicit and compositional cognitions. Why does being in Berlin have a distinct cognitive texture that’s different from being in London? It’s not any one thing you can put your finger on, but a seamless tapestry of tiny differences that you drink in without needing to reflect on it. So when Peli says that he’s not sure that these most immediate, holistic, and yet nonetheless cognitive features of our experience are being taken seriously, I have no choice but to listen, and to see if there’s another way to articulate my own worries.

The result was probably the best thought experiment I’ve ever managed to articulate, and probably the most concise explanation of my problems with certain styles of thinking in the philosophy of mind. Given that my thoughts here are also in some sense an engagement with Wittgenstein and his legacy, I thought it might be nice to try articulating them in an aphoristic style. Here it is.

Continue reading TfE: Mysterianism and Quietism in the Philosophy of Mind