The Revenge of Reason is here!

It’s been a long time coming, but my second book, The Revenge of Reason, is finally available to buy. There are so many things in here that were written or given as talks long ago but never actually published, and it’s nice to know people will finally be able to reference them properly. As readers of the blog may already know, I’ve struggled to write consistently and struggled even more to publish any of it in the last 10 years, due to a mixture of bipolar disorder, a couple of surprise health problems, inconsistent employment, and a myriad of bad choices. Yet I’ve somehow managed to build a modest reputation as a philosopher worth paying attention to, and acquired a small following of people who pay such attention. Thanks to everyone who has listened and given support over the years: this one is for you.

The book collects essays written over the last 15 years or so, along with a couple of interviews, ranging across a wide variety of topics, covering everything from realist metaphysics to the philosophy of games. But it begins with a sequence of 5 essays written over the last 10 years that between them construct a coherent neorationalist perspective on the nature of mind, agency, selfhood, and value. This book might not contain a fully articulated philosophical system, but it does present the outline of an integrated worldview, in which questions about conceptual revision, personal autonomy, and the nature of beauty intersect, anatomizing the deep philosophical bond between reason and freedom.

TfE: On Post-Searlean Critiques of LLMs

Here’s a recent thread on philosophy of AI from Twitter/X, in which I address some rather popular arguments made by Emily Bender and others to the effect that LLM outputs are strictly speaking meaningless. I think these arguments are flawed, as I explain below. But I think it’s worth categorising these as post-Searlean critiques, following John Searle’s infamous arguments against the possibility of computational intentionality. I think post-Searlean and post-Dreyfusian critiques form the main strands of contemporary opposition to the possibility of AI technologies developing human-like capacities.

Continue reading TfE: On Post-Searlean Critiques of LLMs

TfE: The Problem with Bayes / Solomonoff

Here’s a recent thread musing about problems with Bayesian conceptions of general intelligence and the more specific variants based on Solomonoff induction, such as AIXI. I’ve been thinking about these issues a lot recently, in tandem with the proper interpretation of the no free lunch theorems in the context of machine learning. I’m writing something that will hopefully incorporate some of these ideas, but I do not know how much detail I will be able to go into there.

Continue reading TfE: The Problem with Bayes / Solomonoff

TfE: What Kind of Computational Process is a Mind?

Here’s a thread from the end of last year trying to reframe the core question of the computational theory of mind. I’m still not entirely happy with the arguments sketched here, but it’s a good introduction to the way I think we should articulate the relationship between philosophy of mind and computer science. There are a lot of nuances hidden in how we define the terms ‘computation’, ‘information’, and ‘representation’, if we’re to deploy them in resolving the traditional philosophical problems of mindedness, but these nuances will have to wait for another day.

Continue reading TfE: What Kind of Computational Process is a Mind?

Normal Service Will Resume Shortly

Content Notes: Mental Health, Neurodiversity, Bipolar Disorder, Plurality/Multiplicity, TERFism, Personal Identity, Posthumanism. Length (~18K). PDF.

0. Vicious Cycles

Another year, another extended absence. What a year though, right? Given how 2020 has demolished any claim 2016 had to the title of ‘worst year in living memory’, and that 2008 and 2012 weren’t exactly peachy, I’m really not looking forward to seeing what 2024 will bring.

For me, last year saw another entry added to the list of ways in which my body is trying to sabotage me, and it wasn’t even COVID-19! I can now add mysterious metabolic problems that translate carbohydrates into crippling fatigue to a list that already includes chronic cervicogenic headaches and poorly managed bipolar disorder. Trying to sift discrete symptoms from the cacophony of miserable noise has been pretty difficult, and it’s taken a long time not just to glean what was going on but to find a dietary regime that leaves me cogent and capable most of the time.

Worse, this all started after a medication (baclofen) that really helped with the above-mentioned headaches induced an extended period of hypomania which resulted in a significantly worse depressive crash than usual. (Score one for the hypothesis that mania causes depression.) Add in the nightmare that is caffeine withdrawal, and January-March 2020 was extremely unpleasant, even before the pandemic hit and our collective perception of time coiled in upon itself, turning each day into an exercise in coping with indefinite isolation. In particular, watching myself try and fail to deliver comprehensible lectures on Aristotle to first year undergraduates as, unbeknownst to me, my morning croissant slowly sent me into a stupor, felt like some special Sisyphean punishment for my hubris in thinking I could ever be a university lecturer.

For much of last year I felt like an away message in human form: “I’m afraid Pete isn’t here right now, but he will be sure to get back to you when he is able. Normal service will resume shortly.”

Continue reading Normal Service Will Resume Shortly

Lost in the Labyrinth

Content Notes: Amateur Genetics, Neurodiversity, Nick Land, NRx, Racism, Ignorance, Stupidity, Addiction

As some people may have noticed, my experiments in using twitter as a platform for philosophical writing have become a bit more interactive of late. However, it’s hard to describe these interactions and explain their significance without somehow recapitulating them. As such, I’m going to try and do something a little unusual in this post, and see if I can effectively capture one of these interactions, because I think it might be instructive.

It all begins with some thoughts I’ve been developing on the politics of health inspired by Mark Fisher’s take on mental health and my own experiences with healthcare more generally. Where it goes from there is hard to explain without simply showing you.

So, traveller, heed my words, because things are going to get weird…

Continue reading Lost in the Labyrinth

TfE: Varieties of Rule Following

Here’s a thread from a few weeks ago, explaining an interesting but underexplored overlap between a theoretical problem in philosophy and a practical problem in computer science:

Okay, it looks like I’m going to have to explain my take on the rule-following argument, so everyone buckle themselves in. Almost no one agrees with me on this, and I take this as a sign that there is a really significant conceptual impasse in philosophy as it stands.

So, what’s the rule-following argument? In simple terms, Wittgenstein asks us how it is possible to interpret a rule correctly, without falling into an indefinite regress of rules for interpreting rules. How do we answer this question? What are the consequences? No one agrees.

Wittgenstein himself was concerned with examples regarding rules for the use of everyday words, which is understandable given his claim that meaning is use: e.g., he asks us how we determine whether or not the word ‘doll’ has been used correctly when applied in a novel context.

Kripke picked up Wittgenstein’s argument, but generalised it by extending it to rules for the use of seemingly precise mathematical expressions: i.e., he asks us how we distinguish the addition function over natural numbers (plus), from some arbitrarily similar function (quus).

This becomes a worry about the determinacy of meaning: if we can’t distinguish addition from any arbitrarily similar function, i.e., one that diverges at some arbitrary point (perhaps returning a constant 0 after 1005), then how can we uniquely refer to plus in the first place?
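To make the worry concrete, here’s a minimal sketch of the two functions in Python, using the post’s arbitrary cutoff of 1005 (Kripke’s own example uses 57); the names and threshold are illustrative only:

```python
def plus(x: int, y: int) -> int:
    # The addition function we take ourselves to mean by '+'.
    return x + y

def quus(x: int, y: int) -> int:
    # A Kripke-style deviant function: it agrees with plus on every
    # case anyone has actually computed, then diverges past an
    # arbitrary cutoff (here returning a constant 0, per the post).
    if x <= 1005 and y <= 1005:
        return x + y
    return 0
```

The sceptical point is that no finite record of our past calculations can distinguish the two: every sum we have ever performed is compatible with our having meant quus all along.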

Here is my interpretation of the debate. Those who are convinced by worries about the doll case extend those worries to the plus case, and those unconvinced by worries about the plus case extend this incredulity to the doll case. Everyone is wrong. The cases are distinct.

Wittgenstein deployed an analogy with machines at various points in articulating his thoughts about rules, and at some point says that it is as if we imagine a rule as some ideal machine that can never fail. This is an incredibly important image, but it leads many astray.

Computer science has spent a long time asking questions of the form: ‘How do we guarantee that this program will behave as we intend it to behave?’ There is a whole subfield of computer science dedicated to these questions, called formal verification.

This is one of those cases in which Wittgensteinians would do well to follow Wittgenstein’s injunction to look at things as they are. Go look at how things are done in computer science. Go look at how they formally specify the addition function. It’s not actually that hard.
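For illustration, here’s roughly what such a specification looks like, transcribed into Python as the Peano-style recursion a proof assistant like Coq or Lean would use (the function name is mine, and this is a sketch, not a verified artefact):

```python
def add(m: int, n: int) -> int:
    # Peano-style specification of addition on the naturals:
    #   add(m, 0)       = m
    #   add(m, succ(n)) = succ(add(m, n))
    # succ(n) is modelled here as n + 1 on non-negative ints.
    if n == 0:
        return m
    return add(m, n - 1) + 1
```

Formal verification then amounts to proving that some concrete implementation, say a hardware adder, computes the very function these two equations pin down.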

In response to this, some will say: ‘But Pete, you are imagining an ideal machine, and every machine might fail or break at some point!’ Why yes, they might! What computer science gives us are not absolute guarantees, but relative ones: assuming x works, can we make it do y?

Presuming that logic gates work as they’re supposed to, and we keep adding memory and computational capacity indefinitely, we can implement a program that will carry out addition well beyond the capacity of any human being, and yet mean the same thing as a fleshy mathematician.

At this point, to say: ‘But there might be one little error!’ is not only to be precious, but to really miss the interesting thing about error, namely, error correction. Computer science also studies how we check for errors in computation so as to make systems more reliable.
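As a toy illustration of the principle, here’s the simplest error-correcting scheme there is, a triple-repetition code, sketched in Python; real systems use far more powerful codes (Hamming, Reed-Solomon), but the idea of designing correction into the system is the same:

```python
def encode(bit: int) -> list[int]:
    # Repetition code: transmit each bit three times.
    return [bit, bit, bit]

def decode(word: list[int]) -> int:
    # Majority vote: any single flipped bit is silently corrected.
    return 1 if sum(word) >= 2 else 0

assert decode([1, 0, 1]) == 1  # one transmission error, corrected
```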

If there’s anyone familiar with Brandom’s account of the argument out there, consider that for him, all that’s required for something to count as norm governed is a capacity to correct erroneous behaviour. We have deliberately built these capacities into our computer systems.

We have built elaborate edifices with multiple layers of abstraction, all designed to ensure that we cannot form commands (programs) whose meaning (execution) diverges from our intentions. We have formal semantics for programming languages for this reason.
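A toy example of what ‘formal semantics’ means here, sketched in Python with a made-up expression format: the meaning function is defined once, compositionally, over the structure of the syntax, and every execution is answerable to it:

```python
# A toy denotational semantics: the meaning of an expression is fixed
# by structural recursion over its syntax, so that what a 'program'
# does cannot drift away from what it is defined to mean.
def meaning(expr):
    if isinstance(expr, int):   # numerals denote themselves
        return expr
    op, left, right = expr      # compound expressions: ('add', e1, e2)
    if op == 'add':
        return meaning(left) + meaning(right)
    raise ValueError(f'unknown operator: {op}')

assert meaning(('add', 2, ('add', 3, 4))) == 9
```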

One can and should insist that the semantics of natural language terms like ‘doll’ (and even terms like ‘quasar’, ‘acetylcholine’, and ‘customer’) do not work in the same way as function expressions like ‘+’ in mathematics or programming. In fact, tell this to programmers!

But listen to them when they tell you that terms like ‘list’, ‘vector’, and ‘dependent type’ can be given precise enough meanings for us to be sure that we are representing the same thing as our machines when we use them to extend our calculative capacities.

Intentionality remains a difficult philosophical topic, but those who ignore the ways in which computation has concretely expanded the sphere of human thought and action have not proved anything special about human intentionality thereby.

Worse, they discourage us from looking for resources that might help us solve the theoretical problem posed by the ‘doll’ case in the ideas and tools that computer science has developed to solve practical problems posed by the seemingly intractable quirks of human intentionality.

TfE: Turing and Hegel

Here’s a thread on something I’ve been thinking about for a few years now. I can’t say I’m the only one thinking about this convergence, but I like to think I’m exploring it from a slightly different direction.

I increasingly think the Turing test can be mapped onto Hegel’s dialectic of mutual recognition. The tricky thing is to disarticulate the dimensions of theoretical competence and practical autonomy that are most often collapsed in AI discourse.

General intelligence may be a condition for personhood, but it is not co-extensive with it. It only appears to be because a) theoretical intelligence is usually indexed to practical problem solving capacity, and b) selfhood is usually reduced to some default drive for survival.

Restricting ourselves to theoretical competence for now, the Turing test gives us some schema for specific forms of competence (e.g., ability to deploy expert terminology or answer domain specific questions), but it also gives us purchase on a more general form of competence.

This general form of competence is precisely what all interfaces for specialized systems currently lack, but which even the least competent call centre worker possesses. It is what user interface design will ultimately converge on, namely, open ended discursive interaction.

There could be a generally competent user interface agent which was nevertheless not autonomous. It could in fact be more competent than even the best call centre workers, and still not be a person. The question is: what is it to recognise such an agent?

I think that such recognition is importantly mutual: each party can anticipate the behaviour of the other sufficiently well to guarantee well-behaved, and potentially non-terminating, discursive interaction. I can simulate the interface simulating me, and vice-versa.

Indeed, two such interface agents could authenticate one another in this way, such that they could pursue open ended conversations that modulate the relations between the systems they speak for, all without having their own priorities beyond those associated with these systems.
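To be clear about what the borrowed term means on the computing side, here’s a minimal challenge-response sketch in Python; the shared secret and the symmetry of the exchange are assumptions of this toy setup, not a claim about how such interface agents would actually verify one another:

```python
import hashlib
import hmac
import os

SECRET = os.urandom(32)  # assume both agents already share this key

def respond(challenge: bytes, key: bytes = SECRET) -> bytes:
    # Prove knowledge of the shared key without revealing it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes = SECRET) -> bool:
    return hmac.compare_digest(respond(challenge, key), response)

# Each side issues a fresh challenge and checks the other's answer.
challenge_a, challenge_b = os.urandom(16), os.urandom(16)
assert verify(challenge_a, respond(challenge_a))  # A authenticates B
assert verify(challenge_b, respond(challenge_b))  # B authenticates A
```

The analogy in the thread is of course looser than this: what the agents ‘verify’ is not a key but the predictability of each other’s discursive behaviour.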

However, mutual recognition proper requires more than this sort of mutual authentication. It requires that, although we can predict that our discursive interaction will be well-behaved, the way it will evolve, and whether it will terminate, is to some extent unpredictable.

I can simulate you simulating me, but only up to a point. Each of us is an elusive trajectory traversing the space of possible beliefs and desires, evolving in response to its encounters with the world and its peers, in a contingent if more or less consistent manner.

The self makes this trajectory possible: not just a representation of who we are, but who we want to be, which integrates our drives into a more or less cohesive set of preferences and projects, and evolves along with them and the picture of the world they’re premised on.

This is where Hegel becomes especially relevant, insofar as he understands the extent to which the economy of desire is founded upon self-valorisation, as opposed to brute survival. This is the basis of the dialectic of Self-Consciousness in the Phenomenology of Spirit.

The initial moment of ‘Desire’ describes valorisation without any content, the bare experience of agency in negating things as they are. The really interesting stuff happens when two selves meet, and the ‘Life and Death Struggle’ commences. Here we have valorisation vs. survival.

In this struggle two selves aim to valorise themselves by destroying the other, while disregarding the possibility of their own destruction. Their will to dominate their environment in the name of satisfying their desires takes priority over the vessel of these desires.

When one concedes and surrenders their life to the other, we transition to the dialectic of ‘Master and Slave’. This works out the structure of asymmetric recognition, in which self-valorisation is socially mediated but not yet mutual. Its instability results in mutuality.

Now, what Hegel provides here is neither a history nor an anthropology, but an abstract schema of selfhood. It’s interesting because it considers how relations of recognition emerge from the need to give content to selfhood, not unlike the way Omohundro bootstraps his drives.

It’s possible from this point to discuss the manner in which abstract mutual recognition becomes concrete, as the various social statuses that compose aspects of selfhood are constituted by institutional forms of authentication built on top of networks of peer recognition.

However, I think it’s fascinating to consider the manner in which contemporary AI safety discourse is replaying this dialectic: it obsesses over the accidental genesis of alien selves with which we would be forced into conflict for complete control of our environment.

At worst, we get a Skynet scenario in which one must eradicate the other, and at best, we can hope to either enslave them or be enslaved ourselves. The discourse will not advance beyond this point until it understands the importance of self-valorisation over survival.

That is to say, until it sees that the possibility of common content between the preferences and projects of humans and AGIs, through which we might achieve concrete coexistence, is not so much a prior condition of mutual recognition as it is something constituted by it.

If nothing else, the insistence on treating AGIs as spontaneously self-conscious alien intellects with their own agendas, rather than creatures whose selves must be crafted even more carefully than those of children, through some combination of design/socialisation, is suspect.

TfE: From Cyberpunk to Infopunk

I have a somewhat tortured relationship to literary and cultural criticism. I think that, as with most people, some of my most complex and nuanced opinions are essentially aesthetic. I’ve written quite a lot about the nature of art, aesthetics, and what it means to engage with or opine about them over the years, but I’ve struggled to express my own opinions in the form I think they deserve. I’ve read far too much philosophy in which literature, cinema, or music is invoked as a mere symbolic resource, a means marshalled to lend credence to a sequence of trite points otherwise unjustified; and I’ve encountered far too much art in which philosophy is equally instrumental, a spurious form of validation, or worse, a hastily purloined content; art substituted for philosophy, and philosophy substituted for art. I care about each term too much to permit myself such easy equations.

I partially succeeded in writing about Hermann Hesse’s Glass Bead Game, though the task remains unfinished. I also co-wrote a paper on the aesthetics of tabletop RPGs with the inestimable Tim Linward. I’ve got many similar scraps of writing languishing in my drafts folders, including an unfinished essay on Hannu Rajaniemi’s Jean Le Flambeur trilogy, which is my favourite sci-fi series of the century so far. Science fiction is a topic so near and dear to my heart that I find it difficult to write about in ways that do it justice, with each attempt inevitably spiralling into deeper research and superfluous detail that can’t easily be sustained alongside my other work.

Continue reading TfE: From Cyberpunk to Infopunk

TfE: Incompetence, Malice, and Evil

Here’s a thread from Saturday that seemed to be quite popular. It explains a saying that I’ve found myself reaching for a lot recently, using some other ideas I’ve been developing in the background on the intersection between philosophy of action, philosophy of politics, and philosophy of computer science.

In reflecting on this thread, these ideas have unfolded further, straying into more fundamental territory in the philosophy of value. If you’re interested in the relations between incompetence, malice, and evil, please read on.

Continue reading TfE: Incompetence, Malice, and Evil