Here’s a thread from a few weeks ago reacting to the controversy that unfolded surrounding Natalie Wynn’s twitter remarks on the complexities of asking for pronouns in certain contexts. This was written before her more recent video ‘Opulence’, and the second act of that particular clusterfuck. It gave me an opportunity to articulate some of my thoughts on the problems of left-wing political culture, and the way these problems are exacerbated by its transposition and sometimes transmutation into various forms of online discourse. These are closely related to my thoughts on zero-sum politics, and will likely be relevant to some other things I want to say in future, so I think it’s good to get them down here.
Here’s a thread from a little while back in which I outline my critique of the (theological) assumptions implicit in much casual thinking about artificial intelligence, and indeed, intelligence as such.
Another late night thought, this time on Artificial General Intelligence (AGI): if you approach AGI research as if you’re trying to find an algorithm to immanentize the eschaton, then you will be either disappointed or deluded.
There are a bunch of tacit assumptions regarding the nature of computation that tend to distort the way we think about what it means to solve certain problems computationally, and thus what it would be to create a computational system that could solve problems more generally.
There are plenty of people who have already pointed out the theological valence of the conclusions reached on the basis of these assumptions (e.g., the singularity, Roko’s Basilisk, etc.); but these criticisms are low hanging fruit, most often picked by casual anti-tech hacks.
Diagnosing the assumptions themselves is much harder. One can point to moments in which they became explicit (e.g., Leibniz, Hilbert, etc.), and thereby either influential, refuted, or both; but it is harder to describe the illusion of coherence that binds them together.
This illusion is essentially related to that which I complained about in my thread about moral logic a few days ago: the idea that there is always an optimal solution to any problem, even if we cannot find it; whereas, in truth, perfectibility is a vanishingly rare thing.
Using the term ‘perfectibility’ makes the connection to theology much clearer, insofar as it is precisely this that forms the analogical bridge between creator and created in the Christian tradition. Divinity is always conceptually liminal, and perfection is a popular limit.
If you’re looking for a reference here, look at the dialectical evolution of the transcendentals (e.g., unum, bonum, verum, etc.) from Augustine and Anselm to Aquinas and Duns Scotus. The universality of perfectible attributes in creation is the key to the singularity of God.
This illusion of universal perfectibility is the theological foundation of the illusion of computational omnipotence.
We have consistently overestimated what computation is capable of throughout history, whether computation was seen as an algorithmic method executed by humans, or a process of automated deduction realised by a machine. The fictional record is crystal clear on this point.
Instead of imagining machines that can do a task better than we can, we imagine machines that can do it in the best possible way. When we ask why, the answer is invariably some variant upon: it is a machine and therefore must be infallible.
This is absurd enough in certain specific cases: what could a ‘best possible poem’ even be? There is no well-ordering of all possible poems, only ever a complex partial order whose rankings unravel as the many purposes of poetry diverge from one another.
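To make the partial-order point a little more concrete, here is a toy sketch (the criteria and scores are invented purely for illustration): under Pareto dominance over several evaluative criteria, two candidates can each fail to dominate the other, so no total ranking falls out.

```python
def dominates(a, b):
    """Pareto dominance: a scores at least as well as b on every
    criterion, and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Hypothetical scores on (imagery, meter, wit) for two poems:
poem_a = (9, 4, 7)
poem_b = (5, 8, 7)

# Neither dominates the other: the ranking unravels as the
# criteria diverge, leaving a partial order with incomparable pairs.
print(dominates(poem_a, poem_b), dominates(poem_b, poem_a))  # False False
```

The point of the sketch is precisely that no tweak to the scoring rescues a well-ordering: as long as the purposes pull apart, incomparability is built in.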
However, the deep, and seemingly coherent computational illusion is that there is not just a best solution to every problem, but that there is a best way of finding such bests in every circumstance. This implicitly equates true AGI with the Godhead.
One response to this accusation is to say: ‘Of course, we cannot achieve this meta-optimum, but we can approximate it.’
Compare: ‘We cannot reach the largest prime number, but we can still approximate it.’
This is how you trade disappointment for delusion.
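For what it’s worth, the prime analogy can be cashed out in a few lines (a sketch of Euclid’s classic argument): there is no largest prime to approximate, because any finite stock of primes generates a prime outside itself.

```python
def prime_outside(primes):
    """Given a finite list of primes, return a prime not in the list.
    Euclid: the product of the list plus one has a prime factor that
    divides none of the original primes."""
    n = 1
    for p in primes:
        n *= p
    n += 1
    d = 2
    while d * d <= n:  # trial division for the smallest prime factor
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

print(prime_outside([2, 3, 5]))  # 2*3*5 + 1 = 31, which is prime
```

So ‘approximating the largest prime’ is not a hard task at which we fall short; it is a description that fails to pick out any task at all.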
There are some quite sophisticated mathematical delusions out there. But they are still illusions. There is no way to cheat your way to computational omnipotence. There is nothing but strategy all the way down.
This is not to say that there aren’t better/worse strategies, or that we can’t say some useful and perhaps even universal things about how you tell one from the other. Historically, proofs that we cannot fulfil our deductive ambitions have led to better ambitions and better tools.
The computational illusion, or the true Mythos of Logos, amounts to the idea that one can somehow brute force reality. There is more than a mere analogy here, if you believe Scott Aaronson’s claims about learning and cryptography (I’m inclined to).
It continually surprises me just how many people, including those involved in professional AGI research, still approach things in this way. It looks as if, in these cases, the engineering perspective (optimality) has overridden the logical one (incompleteness).
I’ve said it before, and I’ll say it again: you cannot brute force mathematical discovery; there is no algorithm that could progressively search the space of possible theorems. If this does not work in the mathematical world, why would we expect it to work in the physical one?
Anyway, to conclude: we will someday make things that are smarter than us in every way, but the intermediate stages involve things smarter than us in some ways. We will not cross this intelligence threshold by merely adding more computing power.
However it happens, it will not be because of an exponential process of self-improvement that we have accidentally stumbled upon. Self-improvement is not homogeneous, or without autocatalytic instabilities. Humans are self-improving systems, and we are clearly not gods.
Here are some thoughts from a twitter thread a little while back, which expand on some of the ideas in my post about moral logic. Here’s the initial thought:
I am deadly serious about this. I think ‘ought implies can’ is as close to an a priori truth about the normative as one can find. However, it’s important to interpret it in the right way. It’s generally used to reason in the contrapositive direction: if one cannot fulfil a purported responsibility, then there is no sense in which one must fulfil it (i.e., can-not implies may-not).
There are two important corollaries of this: (i) that infinite tasks need not be seen as impossible and thereby non-obligatory, insofar as there is a finite procedure that can be indefinitely iterated (e.g., an infinite series: 1 + 1/2 + 1/4 + 1/8… that converges on an ideal limit, namely, 2; this is Hegel’s true infinite); and (ii) that insofar as capacity is not static, there can be increased responsibility relative to increased capacity as easily as decreased responsibility relative to decreased capacity (‘with great power comes great responsibility’).
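The arithmetic behind the first corollary is easy to check numerically; here is a minimal sketch of the partial sums, each produced by one more finite iteration of the same step:

```python
def partial_sums(n_terms):
    """Partial sums of 1 + 1/2 + 1/4 + ...: a finite procedure,
    indefinitely iterable, whose outputs approach the limit 2."""
    total, term, out = 0.0, 1.0, []
    for _ in range(n_terms):
        total += term
        out.append(total)
        term /= 2
    return out

print(partial_sums(5))  # [1.0, 1.5, 1.75, 1.875, 1.9375]
```

No single iteration reaches 2, but the procedure for getting arbitrarily close is itself entirely finite, which is the point.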
There is more that could be said about this, but I’ll restrict myself to the thread I used to elaborate the original tweet:
I’m really enjoying using twitter as a medium in which to do philosophy, because it forces me to make an argument that is organised in chunks, and to ensure that those chunks are more or less well formed and reasonably compressed. It then allows me to capture those thoughts here, and edit, extend, or recompress them. It also lets me use hyperlinks instead of references, which is impressively liberating. What comes out isn’t exactly perfect, but that’s the point: to enable one to make things better and better beyond the suffocating bounds of the optimal. As I have said many times before, I can be concise, but I find it much harder to be brief; the twitter-blog feedback loop is helping me to work on that. Moreover, it has allowed me to think a bit more about the writing processes of different philosophers throughout history, some of whom are more or less accessible depending on the way they were able to write and permitted to publish.
I feel like Nietzsche and Wittgenstein would have loved twitter, while Leibniz would have preferred the blogosphere, and Plato would have immersed himself in youtube (Socrates: That’s all very well Glaucon, but you have not answered the question with which we began: are traps gay?). I think Hegel would have preferred a wiki, and would have been a big contributor to nLab. I would have loved to have been Facebook friends with Simone Weil or Rosa Luxemburg, but I can imagine getting caught in flame wars with Marx and Engels if I stumbled into the wrong end of a BBS (Engels: Do you even sublate, bro?). I desperately wish I could subscribe to Bataille’s tumblr, or browse the comments Baudrillard made on /r/deepfakes before it got banned. I know the world would be infinitely richer for de Beauvoir and Firestone’s Pornhub comments, but we should be thankful we were spared Heidegger and Arendt’s tinder logs. However, I’m pretty convinced that nothing would have stopped Kant from thinking very hard for a couple of decades and then publishing all his thoughts in several big books. We’d have gotten a few good thinkpieces from him (Clickbait: ‘This man overturned the authority of tradition using this one weird trick, now monarchies hate him!’), but nothing could have stopped his architectonic momentum.
Here’s another Facebook interaction that grew into something interesting, initiated in response to an observation by the incomparable Mark Lance:
I think that this is an interesting observation that I’m sure will feel familiar to anyone who pays attention to politics on FB, either because they use the medium for political communication, or because they take a more anthropological approach to the ways this medium is changing the public sphere, or some combination of the two. It’s also something that will come as no surprise to anyone who has encountered the concept of intersectionality, no matter what they think about the evolution of the concept, the debates surrounding it, and the cursed concept of ‘identity politics’. I was also not aware of Liam’s post on the fallacy he calls ‘political omega-inconsistency’, and I was absolutely delighted to learn about it. However, I was interested in articulating a slightly different sort of fallacious reasoning, and how it is involved in this phenomenon of ‘one dimensionalism’ that Mark was putting his finger on:
So far, I’ve tried importing thoughts from Twitter. Today, I’d like to import some thoughts from Facebook, and to even import some that are not my own! I have a habit of writing massive comments on FB, and getting drawn into some complex discussions and sometimes even strident debates. I often only find the right way to express an important point in such moments. These moments are then lost in time, as the man said, like tears in rain.
Here’s a thought courtesy of Reza Negarestani, who, whatever else you might think of him, is probably in a better position to talk about the curious relationship between philosophy, the humanities, and the art world than anyone; and to treat the relations between these institutions not merely in terms of their possible, abstract configurations (e.g., the way in which philosophy/theory might inform artistic practice), but in terms of their actual, concrete manifestations (e.g., the way in which art institutions contract with philosophers/theorists to provide intellectual prestige):
We are living in a world where the word philosophy is deemed inferior to even fields such as comparative literature, media studies, etc. This is not to say that such fields don’t think philosophically, but to merely point out the compromised states of thought in which a theorist in this or that field thinks philosophically yet either thinks of philosophy as antiquated or a dangerous enterprise while unconsciously parasitizing on it. I have heard the same things from the art people. Just because you know this or that philosopher, it doesn’t mean you know the meaning of philosophy, what it stands for or what the task of a philosopher entails. What is this hostility against philosophy by the very people who feed upon it? How can you actually talk about theory without the philosophical elaboration of the term theory?
Here’s another twitter thread from a few days ago, offering some tentative suggestions for reforming bits of academia that are unquestionably broken. I didn’t get much feedback from my twitter audience, so I’m wondering if people here might be more inclined to offer some critical responses. I think it’s increasingly important not only to have these conversations, but to be seen having these conversations. Over to you.
So, I switched to the official twitter app, and then once more failed to understand how it handles threading: my mistake was assuming that ‘add tweet’ meant ‘add tweet to thread’ in this context. Apparently not.
Regardless, I’ve realised that it’s actually quite good practice to transfer some of my better social media contributions here, if only so I don’t lose them to the endless Heraclitean river of online content. For a while now I’ve been posting fragments of older writing under the heading of One from the Archives (OftA), so I’m going to tag these more spontaneous outbursts Thoughts from Elsewhere (TfE). Here’s the first one, about conceptualising freedom in the age of information.
It seems that I never feel so old as when I try to use twitter. I’ll be turning thirty-four on Tuesday, but returning to twitter after the better part of a year makes me feel like a man out of time, as if I’d gone to sleep and woken up in another decade. It seems that the twitter client I have on my phone won’t handle either the new expanded character limit, or the new threading mechanism, and storify is apparently no more.
So much for micro-blogging then.
Here’s tonight’s stream of twitter thoughts, compiled into a reasonably coherent sequence. It provides a glimpse of the bigger picture work I’ve been doing in philosophy of logic and mathematics over the last few years, which finally seems to be coalescing to the point at which I can be a bit aphoristic about it. I’ve taken the liberty of inserting a few links to make explicit what I’m referring to.
Once more, my hopes of posting more here were dashed by illness. Since the last time I posted I had to go through another extended process of changing pain medication, side effects, withdrawal, and all. However, the good news is that what I’m now taking is having a really positive effect. The pain is mostly in check, though the dizziness still strikes intermittently. The proof of this is that I managed to give a paper for the first time in over a year, at this year’s undergraduate conference in Newcastle.
I’ve had an interest in the philosophy of games for quite a while, having previously co-written a piece on the aesthetics of tabletop RPGs with my good friend Tim Linward (here), and a piece on Hermann Hesse’s Glass Bead Game and the history of the concept of game for the eponymous journal (here). However, I’ve done a lot more research on the topic than has actually led to publications, and this has slowly coagulated into an outline of a theory of games. This talk at Newcastle is the first attempt to present this outline, which I’m hoping to write up into something vaguely publishable soon.