The Revenge of Reason is here!

It’s been a long time coming, but my second book, The Revenge of Reason, is finally available to buy. There are so many things in here that were written or given as talks long ago but never actually published, and it’s nice to know people will finally be able to reference them properly. As readers of the blog may already know, I’ve struggled to write consistently and struggled even more to publish any of it in the last 10 years, due to a mixture of bipolar disorder, a couple of surprise health problems, inconsistent employment, and myriad bad choices. Yet I’ve somehow managed to build a modest reputation as a philosopher worth paying attention to, and acquired a small following of people who pay such attention. Thanks to everyone who has listened and given support over the years; this one is for you.

The book collects essays written over the last 15 years or so, along with a couple of interviews, ranging across a wide variety of topics, from realist metaphysics to the philosophy of games. But it begins with a sequence of five essays written over the last 10 years that between them construct a coherent neorationalist perspective on the nature of mind, agency, selfhood, and value. This book might not contain a fully articulated philosophical system, but it does present the outline of an integrated worldview, in which questions about conceptual revision, personal autonomy, and the nature of beauty intersect, anatomizing the deep philosophical bond between reason and freedom.

TfE: On Post-Searlean Critiques of LLMs

Here’s a recent thread on philosophy of AI from Twitter/X, in which I address some rather popular arguments made by Emily Bender and others to the effect that LLM outputs are strictly speaking meaningless. I think these arguments are flawed, as I explain below. But I think it’s worth categorising these as post-Searlean critiques, following John Searle’s infamous arguments against the possibility of computational intentionality. I think post-Searlean and post-Dreyfusian critiques form the main strands of contemporary opposition to the possibility of AI technologies developing human-like capacities.

Continue reading TfE: On Post-Searlean Critiques of LLMs

TfE: The Problem with Bayes / Solomonoff

Here’s a recent thread musing about problems with Bayesian conceptions of general intelligence and the more specific variants based on Solomonoff induction, such as AIXI. I’ve been thinking about these issues a lot recently, in tandem with the proper interpretation of the no free lunch theorems in the context of machine learning. I’m writing something that will hopefully incorporate some of these ideas, but I do not know how much detail I will be able to go into there.

Continue reading TfE: The Problem with Bayes / Solomonoff

TfE: What Kind of Computational Process is a Mind?

Here’s a thread from the end of last year trying to reframe the core question of the computational theory of mind. I’m still not entirely happy with the arguments sketched here, but it’s a good introduction to the way I think we should articulate the relationship between philosophy of mind and computer science. There are a lot of nuances hidden in how we define the terms ‘computation’, ‘information’, and ‘representation’, if we’re to deploy them in resolving the traditional philosophical problems of mindedness, but these nuances will have to wait for another day.

Continue reading TfE: What Kind of Computational Process is a Mind?

TfE: The History of Metaphysics and Ontology

If there’s a subject I’m officially an expert on, it’s what you might call the methodology of metaphysics: the question of what metaphysics is and how to go about it. I wrote my thesis on the question of Being in Heidegger’s work, trying to disentangle his critique of the metaphysical tradition from the specifics of his phenomenological project and the way it changes between his early and late work. I then wrote a book on how not to do metaphysics, focusing on a specific contemporary example but unfolding it into a set of broader considerations and reflections.

If there’s one thing I’ve learned over the years, it’s that there’s no consistent usage of the terms ‘metaphysics’ and ‘ontology’ internal to philosophy, let alone in the disciplines downstream from it, and that, though the Analytic/Continental divide plays a role in this, it’s a deeper problem. This causes a lot of confusion for students and academics alike, and towards the end of last year I took to Twitter to help clear up this confusion as best I can. This thread proved very popular, so here’s an edited version that’s more easily linkable.

Continue reading TfE: The History of Metaphysics and Ontology

Meet the New Blog, Same as the Old Blog

For those of you who haven’t already noticed, Deontologistics has undergone a bit of a redesign of late. It’s needed one for a while, for various reasons. But the thing that motivated me to finally do it was the realisation that this is the nexus of my philosophical work, rather than some sideshow. This site hardly has massive traffic by the standards of the internet, but there are posts here that are easily more read than any of my publications. Moreover, many of them are better for being written for a blog audience, using casual hyperlinks rather than fussy references, than they ever would have been if translated into a format that passed muster in some journal or other. So I’m not going to fight my natural inclination to write in this medium in favour of something more respectable any more. If Mark proved nothing else, he showed that such outlets can be more influential and productive than the fodder churned out to fill CVs.

To put this differently, I think I’ve finally come to terms with the fact that I’m never going to have a traditional academic career. That ship has sailed. The metrics aren’t on my side, no matter how much influence my work has exerted within and outside the halls of academia. The question is what kind of hustle I can piece together from the bits and pieces of writing, teaching, speaking, and sundry philosophical tasks I’m intermittently capable of performing. I’m not sure what this will look like yet, but I’m eager to try new things. I’ve already guested on a few podcasts, and I’d like to try my hand at making some of my own audio and video content, though the first real experiments are a while off at the moment. In the meantime, I recommend checking out my interviews. I just completed one with CoinDesk on the topic of cryptocurrency.

The most unexpected turn in my philosophical practice in the last few months has been my return to Twitter. I’ve been posting thoughts there for a few years, and then transferring them over here as Thoughts From Elsewhere, along with the odd Facebook comment. But towards the end of last year I started writing a lot more, developing ideas in very long threads that are sometimes the size of full length essays. These turned out to be surprisingly popular, and just before New Year I ran a competition where my followers could vote for a topic for me to write a thread on. Predictably, they chose the topic I least wanted to write about: François Laruelle. The resulting thread is still unfinished, though it’s over 300 tweets long, and is slowly growing into something like a small book. I’ll get it finished over the next few months, even if it kills me. It’s been an interesting experience, and I might try it again once it’s done.

I now have quite a substantial hoard of writing ready to be ported from Twitter to here, where it can be edited, extended, and made available in ways more permanent than the endless river of the feed. However, my plan is to parcel it out over time, in a way that produces a more consistent and less sporadic stream of content for people to read. As I’ve discussed at length in the past, the bipolar cycle can make this sort of consistency hard, but I’m hoping that this way of writing will lead me to be more productive and generally less anxious about how uneven my productivity tends to be. There will still be regular posts, talks, and maybe a book announcement or two as well.

Whither the hustle, then? I’m not entirely sure. I’ve had some people suggest that I move my writing to Substack, but I’ve always seen my writing as a public practice. I’m not against being commissioned to write pieces that are put behind a paywall (hint: if I’ve expressed a thought you’d like to see in full-length article form, I’m available for hire), but I don’t really want to build a paywall of my own. After talking with a few people in similar positions, I’ve decided to start a Patreon. I’m not asking for a lot, as I’m still feeling it out. But if you like what I do and you’d like to encourage me to do more of it, do consider subscribing (one-off donations are also welcome). One thing I’m considering is producing audio recordings of some of my more accessible tweet threads for subscribers. I’ve already done a rough proof of concept using something I wrote about Transcendental Realism and Empirical Idealism several months back. I’m grateful for any and all feedback on this idea.

That’s it for now. Thanks for reading. Here’s to writing more.

Normal Service Will Resume Shortly

Content Notes: Mental Health, Neurodiversity, Bipolar Disorder, Plurality/Multiplicity, TERFism, Personal Identity, Posthumanism. Length (~18K). PDF.

0. Vicious Cycles

Another year, another extended absence. What a year though, right? Given how 2020 has demolished any claim 2016 had to the title of ‘worst year in living memory’, and that 2008 and 2012 weren’t exactly peachy, I’m really not looking forward to seeing what 2021 will bring.

For me, last year saw another entry added to the list of ways in which my body is trying to sabotage me, and it wasn’t even COVID-19! I can now add mysterious metabolic problems that translate carbohydrates into crippling fatigue to a list that already includes chronic cervicogenic headaches and poorly managed bipolar disorder. Trying to sift discrete symptoms from the cacophony of miserable noise has been pretty difficult, and it’s taken a long time not just to glean what was going on but to find a dietary regime that leaves me cogent and capable most of the time.

Worse, this all started after a medication (baclofen) that really helped with the above-mentioned headaches induced an extended period of hypomania, which resulted in a significantly worse depressive crash than usual. (Score one for the hypothesis that mania causes depression.) Add in the nightmare that is caffeine withdrawal, and January–March 2020 was extremely unpleasant, even before the pandemic hit and our collective perception of time coiled in upon itself, turning each day into an exercise in coping with indefinite isolation. In particular, watching myself try and fail to deliver comprehensible lectures on Aristotle to first year undergraduates as, unbeknownst to me, my morning croissant slowly sent me into a stupor, felt like some special Sisyphean punishment for my hubris in thinking I could ever be a university lecturer.

For much of last year I felt like an away message in human form: “I’m afraid Pete isn’t here right now, but he will be sure to get back to you when he is able. Normal service will resume shortly.”

Continue reading Normal Service Will Resume Shortly

Lost in the Labyrinth

Content Notes: Amateur Genetics, Neurodiversity, Nick Land, NRx, Racism, Ignorance, Stupidity, Addiction

As some people may have noticed, my experiments in using twitter as a platform for philosophical writing have become a bit more interactive of late. However, it’s hard to describe these interactions and explain their significance without somehow recapitulating them. As such, I’m going to try and do something a little unusual in this post, and see if I can effectively capture one of these interactions, because I think it might be instructive.

It all begins with some thoughts I’ve been developing on the politics of health, inspired by Mark Fisher’s take on mental health and my own experiences with healthcare more generally. Where it goes from there is hard to explain without simply showing you.

So, traveller, heed my words, because things are going to get weird…

Continue reading Lost in the Labyrinth

TfE: Varieties of Rule Following

Here’s a thread from a few weeks ago, explaining an interesting but underexplored overlap between a theoretical problem in philosophy and a practical problem in computer science:

Okay, it looks like I’m going to have to explain my take on the rule-following argument, so everyone buckle themselves in. Almost no one agrees with me on this, and I take this as a sign that there is a really significant conceptual impasse in philosophy as it stands.

So, what’s the rule-following argument? In simple terms, Wittgenstein asks us how it is possible to interpret a rule correctly, without falling into an indefinite regress of rules for interpreting rules. How do we answer this question? What are the consequences? No one agrees.

Wittgenstein himself was concerned with examples regarding rules for the use of everyday words, which is understandable given his claim that meaning is use: e.g., he asks us how we determine whether or not the word ‘doll’ has been used correctly when applied in a novel context.

Kripke picked up Wittgenstein’s argument, but generalised it by extending it to rules for the use of seemingly precise mathematical expressions: i.e., he asks us how we distinguish the addition function over natural numbers (plus), from some arbitrarily similar function (quus).

This becomes a worry about the determinacy of meaning: if we can’t distinguish addition from any arbitrarily similar function, i.e., one that diverges at some arbitrary point (perhaps returning a constant 0 after 1005), then how can we uniquely refer to plus in the first place?
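To make the plus/quus contrast concrete, here is a hypothetical Python sketch (the threshold and the constant-0 divergence follow the example above; Kripke’s own version uses different numbers, and nothing here is drawn from the original thread):

```python
def plus(x: int, y: int) -> int:
    """Ordinary addition over the natural numbers."""
    return x + y

def quus(x: int, y: int) -> int:
    """A 'bent' addition: agrees with plus below a threshold,
    then diverges, returning a constant 0 (per the example above)."""
    THRESHOLD = 1005
    if x < THRESHOLD and y < THRESHOLD:
        return x + y
    return 0

# Every sum a finite speaker has ever actually computed may lie below
# the threshold, so their past usage alone cannot discriminate which
# of the two functions they meant.
```

The point of the sketch is just that the two functions agree on any finite record of past use while differing in extension.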

Here is my interpretation of the debate. Those who are convinced by worries about the doll case extend those worries to the plus case, and those unconvinced by worries about the plus case extend this incredulity to the doll case. Everyone is wrong. The cases are distinct.

Wittgenstein deployed an analogy with machines at various points in articulating his thoughts about rules, and at some point says that it is as if we imagine a rule as some ideal machine that can never fail. This is an incredibly important image, but it leads many astray.

Computer science has spent a long time asking questions of the form: ‘How do we guarantee that this program will behave as we intend it to behave?’ There is a whole subfield of computer science dedicated to these questions, called formal verification.

This is one of those cases in which Wittgensteinians would do well to follow Wittgenstein’s injunction to look at things how they are. Go look at how things are done in computer science. Go look at how they formally specify the addition function. It’s not actually that hard.
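As a minimal illustration of what such a specification looks like (a sketch in Python rather than a real verification toolchain), addition on the naturals is pinned down by two Peano-style recursive equations, and any implementation can then be checked against them:

```python
def add(m: int, n: int) -> int:
    """Peano-style specification of addition on the naturals:
    add(m, 0) = m;  add(m, n + 1) = add(m, n) + 1."""
    if n == 0:
        return m
    return add(m, n - 1) + 1

# The two equations fix the function's value on every pair of naturals;
# here we spot-check an implementation (the built-in +) against them.
for m in range(20):
    for n in range(20):
        assert add(m, n) == m + n
```

In a genuine verification setting the same equations would be stated in a proof assistant and the correspondence proved rather than sampled, but the shape of the exercise is the same.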

In response to this, some will say: ‘But Pete, you are imagining an ideal machine, and every machine might fail or break at some point?’ Why yes, they might! What computer science gives us are not absolute guarantees, but relative ones: assuming x works, can we make it do y?

Presuming that logic gates work as they’re supposed to, and we keep adding memory and computational capacity indefinitely, we can implement a program that will carry out addition well beyond the capacity of any human being, and yet mean the same thing as a fleshy mathematician.

At this point, to say ‘But there might be one little error!’ is not only to be precious, but to really miss the interesting thing about error, namely, error correction. Computer science also studies how we check for errors in computation so as to make systems more reliable.
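A toy example of the idea (a triple-repetition code, one of the simplest error-correcting schemes; this is my illustration, not something from the thread):

```python
def encode(bits):
    """Triple-repetition code: transmit each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority vote over each triple corrects any single flipped bit."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

msg = [1, 0, 1, 1]
sent = encode(msg)
sent[4] ^= 1                 # a single transmission error
assert decode(sent) == msg   # the error is corrected, not merely noticed
```

Real systems use far more efficient codes, but the principle is the same: unreliable components are composed into a system whose behaviour is reliably correct.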

If there’s anyone familiar with Brandom’s account of the argument out there, consider that for him, all that’s required for something to count as norm-governed is a capacity to correct erroneous behaviour. We have deliberately built these capacities into our computer systems.

We have built elaborate edifices with multiple layers of abstraction, all designed to ensure that we cannot form commands (programs) whose meaning (execution) diverges from our intentions. We have formal semantics for programming languages for this reason.

One can and should insist that the semantics of natural language terms like ‘doll’ (and even terms like ‘quasar’, ‘acetylcholine’, and ‘customer’) do not work in the same way as function expressions like ‘+’ in mathematics or programming. In fact, tell this to programmers!

But listen to them when they tell you that terms like ‘list’, ‘vector’, and ‘dependent type’ can be given precise enough meanings for us to be sure that we are representing the same thing as our machines when we use them to extend our calculative capacities.

Intentionality remains a difficult philosophical topic, but those who ignore the ways in which computation has concretely expanded the sphere of human thought and action have not proved anything special about human intentionality thereby.

Worse, they discourage us from looking for resources that might help us solve the theoretical problem posed by the ‘doll’ case in the ideas and tools that computer science has developed to solve practical problems posed by the seemingly intractable quirks of human intentionality.

TfE: Turing and Hegel

Here’s a thread on something I’ve been thinking about for a few years now. I can’t say I’m the only one thinking about this convergence, but I like to think I’m exploring it from a slightly different direction.

I increasingly think the Turing test can be mapped onto Hegel’s dialectic of mutual recognition. The tricky thing is to disarticulate the dimensions of theoretical competence and practical autonomy that are most often collapsed in AI discourse.

General intelligence may be a condition for personhood, but it is not co-extensive with it. It only appears to be because a) theoretical intelligence is usually indexed to practical problem solving capacity, and b) selfhood is usually reduced to some default drive for survival.

Restricting ourselves to theoretical competence for now, the Turing test gives us some schema for specific forms of competence (e.g., ability to deploy expert terminology or answer domain specific questions), but it also gives us purchase on a more general form of competence.

This general form of competence is precisely what all interfaces for specialized systems currently lack, but which even the least competent call centre worker possesses. It is what user interface design will ultimately converge on, namely, open ended discursive interaction.

There could be a generally competent user interface agent which was nevertheless not autonomous. It could in fact be more competent than even the best call centre workers, and still not be a person. The question is: what is it to recognise such an agent?

I think that such recognition is importantly mutual: each party can anticipate the behaviour of the other sufficiently well to guarantee well-behaved and potentially non-terminating discursive interaction. I can simulate the interface simulating me, and vice versa.

Indeed, two such interface agents could authenticate one another in this way, such that they could pursue open ended conversations that modulate the relations between the systems they speak for, all without having their own priorities beyond those associated with these systems.

However, mutual recognition proper requires more than this sort of mutual authentication. It requires that, although we can predict that our discursive interaction will be well-behaved, the way it will evolve, and whether it will terminate, is to some extent unpredictable.

I can simulate you simulating me, but only up to a point. Each of us is an elusive trajectory traversing the space of possible beliefs and desires, evolving in response to its encounters with the world and its peers, in a contingent if more or less consistent manner.

The self makes this trajectory possible: not just a representation of who we are, but who we want to be, which integrates our drives into a more or less cohesive set of preferences and projects, and evolves along with them and the picture of the world they’re premised on.

This is where Hegel becomes especially relevant, insofar as he understands the extent to which the economy of desire is founded upon self-valorisation, as opposed to brute survival. This is the basis of the dialectic of Self-Consciousness in the Phenomenology of Spirit.

The initial moment of ‘Desire’ describes valorisation without any content, the bare experience of agency in negating things as they are. The really interesting stuff happens when two selves meet, and the ‘Life and Death Struggle’ commences. Here we have valorisation vs. survival.

In this struggle two selves aim to valorise themselves by destroying the other, while disregarding the possibility of their own destruction. Their will to dominate their environment in the name of satisfying their desires takes priority over the vessel of these desires.

When one concedes and surrenders their life to the other, we transition to the dialectic of ‘Master and Slave’. This works out the structure of asymmetric recognition, in which self-valorisation is socially mediated but not yet mutual. Its instability results in mutuality.

Now, what Hegel provides here is neither a history nor an anthropology, but an abstract schema of selfhood. It’s interesting because it considers how relations of recognition emerge from the need to give content to selfhood, not unlike the way Omohundro bootstraps his drives.

It’s possible from this point to discuss the manner in which abstract mutual recognition becomes concrete, as the various social statuses that compose aspects of selfhood are constituted by institutional forms of authentication built on top of networks of peer recognition.

However, I think it’s fascinating to consider the manner in which contemporary AI safety discourse is replaying this dialectic: it obsesses over the accidental genesis of alien selves with which we would be forced into conflict for complete control of our environment.

At worst, we get a Skynet scenario in which one must eradicate the other, and at best, we can hope to either enslave them or be enslaved ourselves. The discourse will not advance beyond this point until it understands the importance of self-valorisation over survival.

That is to say, until it sees that the possibility of common content between the preferences and projects of humans and AGIs, through which we might achieve concrete coexistence, is not so much a prior condition of mutual recognition as it is something constituted by it.

If nothing else, the insistence on treating AGIs as spontaneously self-conscious alien intellects with their own agendas, rather than creatures whose selves must be crafted even more carefully than those of children, through some combination of design/socialisation, is suspect.