TfE: Immanentizing the Eschaton

Here’s a thread from a little while back in which I outline my critique of the (theological) assumptions implicit in much casual thinking about artificial intelligence, and indeed, intelligence as such.

Another late-night thought, this time on Artificial General Intelligence (AGI): if you approach AGI research as if you’re trying to find an algorithm to immanentize the eschaton, then you will be either disappointed or deluded.

There are a bunch of tacit assumptions regarding the nature of computation that tend to distort the way we think about what it means to solve certain problems computationally, and thus what it would be to create a computational system that could solve problems more generally.

There are plenty of people who have already pointed out the theological valence of the conclusions reached on the basis of these assumptions (e.g., the singularity, Roko’s Basilisk, etc.); but these criticisms are low hanging fruit, most often picked by casual anti-tech hacks.

Diagnosing the assumptions themselves is much harder. One can point to moments in which they became explicit (e.g., Leibniz, Hilbert, etc.), and thereby either influential, refuted, or both; but it is harder to describe the illusion of coherence that binds them together.

This illusion is essentially related to the one I complained about in my thread on moral logic a few days ago: the idea that there is always an optimal solution to any problem, even if we cannot find it; whereas, in truth, perfectibility is a vanishingly rare thing.

Using the term ‘perfectibility’ makes the connection to theology much clearer, insofar as it is precisely this that forms the analogical bridge between creator and created in the Christian tradition. Divinity is always conceptually liminal, and perfection is a popular limit.

If you’re looking for a reference here, look at the dialectical evolution of the transcendentals (e.g., unum, bonum, verum, etc.) from Augustine and Anselm to Aquinas and Duns Scotus. The universality of perfectible attributes in creation is the key to the singularity of God.

This illusion of universal perfectibility is the theological foundation of the illusion of computational omnipotence.

We have consistently overestimated what computation is capable of throughout history, whether computation was seen as an algorithmic method executed by humans, or a process of automated deduction realised by a machine. The fictional record is crystal clear on this point.

Instead of imagining machines that can do a task better than we can, we imagine machines that can do it in the best possible way. When we ask why, the answer is invariably some variant upon: it is a machine and therefore must be infallible.

This is absurd enough in certain specific cases: what could a ‘best possible poem’ even be? There is no total ordering of all possible poems by quality, only ever a complex partial order whose rankings unravel as the many purposes of poetry diverge from one another.
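To make the point concrete, here is a toy sketch (my own illustration, with made-up scoring criteria, not anything from the original thread) of how a partial order fails to yield a unique ‘best’: once candidates are scored along several incommensurable criteria, two of them can each fail to dominate the other, and ‘the maximum’ simply picks out nothing.

```python
# Toy illustration (hypothetical criteria) of why a partial order need not
# have a unique maximum: neither candidate dominates the other, so asking
# for "the best poem" is asking for something undefined.

from dataclasses import dataclass
from typing import Dict


@dataclass
class Poem:
    name: str
    scores: Dict[str, float]  # criterion -> score


def dominates(a: Poem, b: Poem) -> bool:
    """Pareto dominance: at least as good on every criterion, better on one."""
    criteria = a.scores.keys()
    return (all(a.scores[c] >= b.scores[c] for c in criteria)
            and any(a.scores[c] > b.scores[c] for c in criteria))


elegy = Poem("elegy", {"grief": 0.9, "wit": 0.2})
satire = Poem("satire", {"grief": 0.1, "wit": 0.95})

# Neither dominates the other: the ranking unravels as the purposes diverge.
print(dominates(elegy, satire), dominates(satire, elegy))  # False False
```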

However, the deep, and seemingly coherent computational illusion is that there is not just a best solution to every problem, but that there is a best way of finding such bests in every circumstance. This implicitly equates true AGI with the Godhead.

One response to this accusation is to say: ‘Of course, we cannot achieve this meta-optimum, but we can approximate it.’

Compare: ‘We cannot reach the largest prime number, but we can still approximate it.’

This is how you trade disappointment for delusion.
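To spell the analogy out (this is standard number theory, not part of the original thread): Euclid’s argument shows there is no largest prime to approximate in the first place, since any finite list of primes generates a prime outside itself. A minimal sketch:

```python
# Sketch of Euclid's argument: for any finite list of primes, prod(list) + 1
# has a prime factor outside the list, so there is no largest prime, and
# "approximating" it is not a meaningful goal.

from math import prod


def prime_beyond(primes):
    """Return a prime factor of prod(primes) + 1; it cannot be in `primes`."""
    n = prod(primes) + 1  # leaves remainder 1 when divided by each given prime
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d  # the smallest divisor greater than 1 is necessarily prime
        d += 1
    return n  # n itself is prime


print(prime_beyond([2, 3, 5, 7]))  # 211, which lies outside the list
```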

There are some quite sophisticated mathematical delusions out there. But they are still delusions. There is no way to cheat your way to computational omnipotence. There is nothing but strategy all the way down.

This is not to say that there aren’t better/worse strategies, or that we can’t say some useful and perhaps even universal things about how you tell one from the other. Historically, proofs that we cannot fulfil our deductive ambitions have led to better ambitions and better tools.

The computational illusion, or the true Mythos of Logos, amounts to the idea that one can somehow brute force reality. There is more than a mere analogy here, if you believe Scott Aaronson’s claims about learning and cryptography (I’m inclined to).

It continually surprises me just how many people, including those involved in professional AGI research, still approach things in this way. It looks as if, in these cases, the engineering perspective (optimality) has overridden the logical one (incompleteness).

I’ve said it before, and I’ll say it again: you cannot brute force mathematical discovery; there is no algorithm that could search the space of possible theorems and reliably deliver the ones worth having. If this does not work in the mathematical world, why would we expect it to work in the physical one?

For additional suggestive material on this and related problems, consider: the problem of induction, Gödel’s incompleteness theorems, and the halting problem.
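For the last of these, the standard diagonalisation can be stated in a few lines (a textbook sketch, assuming a hypothetical oracle `halts` that obviously cannot be implemented):

```python
# Sketch of the halting-problem diagonalisation (textbook material).
# Suppose, for contradiction, that halts(f, x) correctly decides whether
# f(x) terminates, for every program f and input x.

def halts(f, x):
    """Hypothetical oracle: True iff f(x) eventually halts."""
    raise NotImplementedError("no such total, correct procedure can exist")


def diagonal(f):
    # Do the opposite of whatever the oracle predicts about f run on itself.
    if halts(f, f):
        while True:
            pass  # loop forever
    return "halted"

# diagonal(diagonal) refutes the oracle: if halts(diagonal, diagonal) is True,
# the call loops forever; if it is False, the call halts. Either way the
# oracle is wrong about at least one case, so no such algorithm exists.
```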

Anyway, to conclude: we will someday make things that are smarter than us in every way, but the intermediate stages will involve things smarter than us only in some ways. We will not cross this intelligence threshold by merely adding more computing power.

However it happens, it will not be because of an exponential process of self-improvement that we have accidentally stumbled upon. Self-improvement is neither homogeneous nor free of autocatalytic instabilities. Humans are self-improving systems, and we are clearly not gods.

More Atheology on Deleuze

Atheology has just put up another post on my interpretation of Deleuze, this time based on my more recent paper ‘Ariadne’s Thread: Temporality, Modality, and Individuation in Deleuze’s Metaphysics’ (available here). It’s a very generous and thorough reading of the paper, in relation to the other things I’ve written about Deleuze on the blog. Though he expresses a certain dissatisfaction with the unfinished character of the essay (it was written for an hour-long presentation, and, alas, was inevitably consumed by preliminaries), in parallel with his dissatisfaction at the unfinished character of my posts on Deleuze and Sufficient Reason (available here), he also says:

This strikes me as an extremely promising angle of approach and one which could easily yield a book-length treatment, perhaps under the title Ariadne’s Thread: Deleuze and the Song of Sufficient Reason. For me this approach represents tangible progress in the study of Deleuze’s thought.

I can only feel humbled by such praise, and would love to write this book one of these days. Alas, I am stuck in the same position as many of my compatriots, unsure as to which aspects of my work will lead to stable employment, so it’ll have to wait for now. That being said, I do intend to extend the ‘Ariadne’s Thread’ paper for publication at some point, once a few other commitments are out of the way. As such, the comments in Atheology’s post are very helpful. However, there are a number of possible misunderstandings and points that can be addressed quickly, and I will endeavour to do so here. I’ll try to number the points to keep them brief and organised. Continue reading More Atheology on Deleuze

Atheology on Deleuze and Sufficient Reason

This is a very quick post to point people at a new blog called Atheology (now linked in the side bar), which has just put up a post (see here) commenting on my ‘Song of Sufficient Reason’ series of posts on Deleuze and the PSR (see the Important Posts section). That series of posts never got finished for various reasons, the third instalment being lost somewhere along the way. However, a lot of the unresolved threads are picked up in my more recent ‘Ariadne’s Thread’ paper on the overall shape of Deleuze’s metaphysics (see the Other Work section, or the Video section). It’s wonderful to find someone commenting so perspicuously on work I thought everyone had forgotten about (myself included). I look forward to reading more from Atheology on these and other topics as it appears.

Deleuzian Catharsis

I’ve probably written before about my history with Deleuze, but I can’t think where exactly. For those who don’t know, I began my PhD thesis with the intent of working on Deleuze’s metaphysics and its implications for the philosophy of language, with an eye to combining it with Wittgensteinian pragmatism. The story goes that I couldn’t find the methodology I needed to adequately explain (let alone justify) Deleuze’s metaphysics, and so took a detour into Heidegger to acquire it. This was supposed to last a month or so, and ended up consuming four years of research and my entire thesis. I was also converted to Brandom’s Hegelian pragmatism in that time, and that has monopolised a lot of my other research efforts in the meantime. I’ve written the odd thing about Deleuze on this blog, but I haven’t seriously touched the books (let alone kept up with the secondary literature) in a good few years.

However, courtesy of my good friend (and prominent Deleuze scholar) Henry Somers-Hall, I recently got invited to give a paper at Manchester Metropolitan University on Deleuze’s theory of time. This was part of a larger workshop on Deleuze that was very successful indeed. A great event all around. Lots of things kept me from writing my paper until far too close to the deadline (I was working on it right up until the last minute), but it was a cathartic experience from beginning to end. Three years or so of pent-up Deleuzian ideas came out all at once, and it produced a paper that is very dense, but not for that matter inaccessible. Moreover, the paper served as a wonderful vindication of my methodological detour, insofar as it displays the power of the critical framework I’ve been developing here and elsewhere. I’ve sometimes been accused of getting stuck at the level of critique, and never getting to the actual metaphysics. I think this is a pretty performative refutation of those criticisms.

I’m enormously pleased with the paper, and I was enormously gratified by the positive reception it received from the people at the workshop. There were some excellent questions and some great discussions afterwards. I’m reliably informed that the video of the various talks will be going up online soon, including Q&As, but I’ve decided to make minor revisions to my paper and post it up on the blog (here) while it’s still at the forefront of my mind. It’ll no doubt get revised further and turned into a proper publication at some point, but for now, enjoy!

Comments on Capitalist Realism (Part 1)

I recently finished reading Mark Fisher‘s Capitalist Realism. I’m very sorry it took me so long. Now I’m at the end of my thesis I’m starting to finally do things I’ve been putting off for a long time. Mark really must be praised for writing such an accessible and yet eminently perceptive and persuasive book. It touches on a number of issues I’ve been thinking about myself for a long time, and gives names to several phenomena that have been on the edge of my intellectual awareness for even longer. I don’t agree with all of it, and I can see numerous points where the discussion needs to be taken further, but these are merely signs of how thought provoking and well-written the book is.

As I’ve said, now that I’m at the end of the thesis, I’m starting to pick up things I’ve put off, and to start new projects again. Politics is what originally got me into philosophy. Specifically, I was motivated to take up theoretical philosophy by precisely what demotivated me from engaging in practical political action: the problem of how it is possible to change anything in the current environment (an environment Mark so perspicuously circumscribes). I remember attending the big anti-war march just before the beginning of the Iraq war in London, the biggest peace protest in history at the time (I think), and seeing how easily it was assimilated and dissipated by the media-democratic complex. It struck me that a smaller number of people (with a smaller amount of public support behind them) had brought down the Vietnam war, and yet this march did precisely nothing. I was 17 at the time, and hoping to go into politics. That event disrupted my perspective and made me want to understand why it did nothing, and how it would be possible to do something. I’ve spent the last 7 years or so on a journey into high theory, acquiring a number of abstract theoretical tools along the way, and I think I’m finally ready to make my descent back toward concrete political issues. Capitalist Realism has only reinforced my resolve on this front.

To this end, I’m in the early stages of starting a new blog to discuss more concrete political issues. Deontologistics has always been very much a blog about abstract issues, and although I’ve touched on the odd bit of political and ethical theory here and there, that’s never been its purpose. The arrangements for the new blog are still coming together though (it doesn’t even have a name yet), so watch this space. The one thing I can tell you is that if there is one phrase that sums up its modus operandi, it’s this: political rationalism. Given all this, I feel that it’s a good idea for me to write up my thoughts on Capitalist Realism (or CR), as a preliminary to the work I’m hoping to undertake. This will be less of a summary of the book’s core ideas than an exploration of the terrain it covers from within my own theoretical perspective. This means adding some theoretical supplements and using these to sketch the ways in which I think some of Mark’s ideas can be developed. The other qualification to add here is that I’m not as well versed in political theory as I’d like, and so it’s quite possible that I’ll reinvent some theoretical wheels as I’m going here (especially with regard to Marx and Habermas). I’m very happy to have this pointed out to me.

As should be no surprise to regular readers, this will be a long post (this part is 16,000 words, which I believe is a new record). It started out life as an email to Mark and became somewhat excessive. It’s gotten so long that I’ve actually had to split it up into parts (the second has yet to be completed). Here is the first part, which involves more theoretical supplementation than political musing. The second part should get more concrete, or at least, as concrete as I am known to get.

Anyway, here we go…

Continue reading Comments on Capitalist Realism (Part 1)

Hijacking Correlationism

Graham recently put up an interesting post about the various positions within Meillassoux’s philosophical ‘spectrum’, and where OOP stands in relation to them (here, linked by Gratton here). This is most interesting, because it goes some way to confirming the diagnosis of OOP I made in my TR essay (here). Since most people don’t have the time to read the whole thing, I’ll recreate the basic elements of the argument here (with a certain amount of tweaking).

First of all, the core point of the essay is that the ‘spectrum’ of positions provided by Meillassoux is incomplete, and that there are at least two further important positions (not including OOP) that need to be added to it, which I call deflationary realism and transcendental realism. The revised range of possible positions should be something like: classical realism (Aristotle, Locke, etc.), classical idealism (Berkeley, Hegel, etc.), weak correlationism (Kant), strong correlationism (Wittgenstein, Heidegger, etc.), speculative materialism (Meillassoux), OOP (Graham, and perhaps related OOO variants), deflationary realism (Quine, McDowell, Brandom, etc.), and transcendental realism (me and potentially a few others). I won’t line these up in a spectrum, because I think there’s too many dimensions at work here.

The other relevant point that I made is that both Meillassoux and Graham justify their respective positions by hijacking the arguments for correlationism, albeit in different ways. This is very explicit in Meillassoux’s work, though has been somewhat more understated in Graham’s (although his post makes this explicit to some extent). On the basis of this, my argument was that if we undermine the arguments for correlationism directly, then we undermine the most powerful arguments in favour of both speculative materialism and OOP. This was then done by showing that despite the fact that correlationism is meant to be an epistemological position (or at least that we are supposed to be able to formulate it in purely epistemological terms), it depends upon certain implicit ontological (and thus metaphysical) assumptions. In effect, what Meillassoux and Graham do in hijacking correlationism is just to try and make these assumptions explicit, and work out their consequences. The problem is simply that once one recognises this, one sees that these are not metaphysical positions that are necessitated by non-metaphysical (epistemological or phenomenological) facts, but are just different ways to develop some existing metaphysical assumptions. Arguing against those assumptions thus undermines correlationism, speculative materialism, and OOP all at once.

This is a very schematic presentation of these ideas, which doesn’t show how the two sides link up. As such, I’m going to try and flesh it out a bit.

Continue reading Hijacking Correlationism

The Question of Being

When I began my thesis, I started with the naive assumption that most people knew what was meant by Heidegger’s ‘question of the meaning of Being’. Indeed, I thought I knew. The first two years were a systematic exercise in uncovering just how much others, and I myself, had taken for granted that we understood what this question is, before simply proceeding to talk about other things, be it the specifics of Heidegger’s own philosophy or the relative merits of other attempts to answer this question.

There is a horrible irony in this. Heidegger raised the question of the meaning of Being in response to the fact that although we think we know what we mean by ‘being’, when pressed we are unable to say what it is precisely that we mean. Moreover, he showed that the fact that we did not see this as itself problematic indicated a historical trend of the forgetting of Being, perpetrated largely by metaphysics. Many of the thinkers who come after Heidegger acknowledge Heidegger’s diagnosis, and they go on to talk about Being in a properly theoretical register, but I get the sense that if they are pressed they are equally unable to say what it is they mean. Being thus becomes an almost empty concept in much philosophical discussion, used in a haphazard way that hinders real attempts at understanding and obfuscates its philosophical import. If anything, this is a worse forgetting of the issue than that perpetrated by metaphysics itself, because we have moved from mistakenly thinking that we know what ‘being’ means in a pre-theoretical way to mistakenly thinking we know what it means in a properly theoretical way. The former is a matter of familiarity while the latter is a matter of hubris.

Obviously, I’m not claiming that all post-Heideggerian thinkers are prone to this misunderstanding. However, I do think that much Heidegger scholarship, and some post-Heideggerian philosophical projects are simply not rigorous enough in delineating what they mean when they talk about Being or the question of Being. In this post I want to try and undo some of the obfuscation this causes by laying out what I take the question of the meaning of Being (or simply the question of Being) to be. Hopefully, this should also illuminate some things I have said elsewhere about the nature of ontology and its relationship to metaphysics (especially here).

One final warning: this post is very abstract. Such is the peril of thinking about Being. If you don’t want to deal with such heavy abstraction, my advice is to think about beings. This post is also very long, pushing 7,000 words this time. I thank anyone who takes the time to read the whole thing in advance, although it need not be consumed in one sitting.

Continue reading The Question of Being

Deleuze: The Song of Sufficient Reason – (Part 2)

Here is the second part of my discussion of Deleuze and sufficient reason. In this post, I’ll be explaining some more of the details of my interpretation of Deleuze’s metaphysics. This won’t yet explain how Deleuze manages to reconcile sufficient reason with the principle of univocity, but it will start developing the necessary theoretical resources to do so.

3. Virtuality Contra Possibility

As I said in the last post, we are forced to choose between onto-theology and sufficient reason on the one hand, and negative theology and the rejection of sufficient reason on the other, only insofar as we think in terms of the possible and the actual. Thus, in order to demonstrate how Deleuze escapes from this trap, it is necessary to elucidate in brief his alternative to thinking in these terms, namely, his account of the virtual and its actualisation. Now, I don’t claim to understand the virtual in full. Grasping the proper nature of the virtual is perhaps the most difficult aspect of Deleuze’s philosophy, and I’m not sure anyone has done so entirely. However, I can explain it in part.

Continue reading Deleuze: The Song of Sufficient Reason – (Part 2)

Deleuze: The Song of Sufficient Reason

After another post on the structure of normativity I owe people some metaphysics, so I’m going to return to my continuing elaboration of Deleuze. In my earlier posts I have indicated how what I have called the strong version of the principle of univocity is at the heart of Deleuze’s metaphysics, in that many of the other decisions he makes in his metaphysics follow from it. I have also said that Deleuze’s system can be understood as a reinvention of Spinoza’s system to incorporate this principle (and thus also the ontological difference). In this post I want to talk about the other principle at the heart of Deleuze’s metaphysics, one which he shares with Spinoza: the principle of sufficient reason. In talking about this I hope to elaborate how other aspects of his metaphysics function, most importantly his monism.

I’ve been working on this post for a little while, and it’s ballooned to nearly 6,000 words and climbing, so I’m going to break it up into parts. The first two parts I’m posting now will set the stage, and the following ones will do some more in-depth metaphysical work.

1. Sufficient Reason and Onto-theology

People have a tendency to ignore the fact that Deleuze accepts some form of the principle of sufficient reason, despite the fact that he says at one point that D&R is a book about sufficient reason. The fact that Deleuze accepts this is of great relevance in contemporary debates, given how fashionable it has become to reject it (see Badiou and Meillassoux, whom I’ll talk about a little below). However, the other important thing about Deleuze’s acceptance of the principle is that it at once underscores his similarities with the key rationalist thinkers – Spinoza and Leibniz – and highlights the relevant ways in which he moves beyond them.

Continue reading Deleuze: The Song of Sufficient Reason