Here is the second part of my discussion on Deleuze and sufficient reason. In this post, I’ll be explaining some more of the details of my interpretation of Deleuze’s metaphysics. This won’t yet explain how Deleuze manages to reconcile sufficient reason with the principle of univocity, but it will start developing the necessary theoretical resources to do so.
3. Virtuality Contra Possibility
As I said in the last post, we are forced to choose between onto-theology and sufficient reason on the one hand, and negative theology and the rejection of sufficient reason on the other, only insofar as we think in terms of the possible and actual. Thus, in order to demonstrate how Deleuze escapes from this trap it is necessary to elucidate in brief his alternative to thinking in these terms, namely, his account of the virtual and its actualisation. Now, I don’t claim to understand the virtual in full. Grasping the proper nature of the virtual is perhaps the most difficult aspect of Deleuze’s philosophy, and I’m not sure anyone has done so entirely. However, I can explain it in part.
There are two sides to the virtual: capacities and tendencies. Deleuze appropriates these from Spinoza and Bergson, respectively. In addition, these two sides correspond to possibility and probability, respectively. Deleuze does not deny that we necessarily represent things in terms of their possibilities and probabilities, but he warns us not to trace the transcendental from the empirical, not to hypostatize the structure of possibility and probability into the structure of the world itself (in the case of probability, many other philosophers have noted the problems of giving an objective reading of it). Instead, Deleuze takes it that what we are trying to get a grasp on when we represent things in terms of possibilities and probabilities are actually their capacities and tendencies.
The difficulty in thinking about capacities is that we cannot describe particular capacities in terms other than those of possibility. I can say that a tick has a capacity to jump off a branch, but this is usually only cashed out by saying that it is possible that the tick could jump off the branch, or even that if it were on a branch then it could jump off it. However, this should not be taken to imply that we must hypostatize possibility. Even if we are bound to describe particular capacities in these terms, we can still think of the character of capacities in general in the absence of such things as metaphysically possible states-of-affairs, or full blown possible worlds. Deleuze follows Spinoza in holding that beings (or modes) have more capacities than they will ever actualise. Those capacities which are actualised are necessarily actualised, and those which are not are necessarily not. This is the major tenet of Spinozan determinism, and Deleuze endorses it.
Tendencies are another matter entirely. If capacities set up the dimensions of a virtual multiplicity, then its tendencies are dictated by the singularities which shape its surface (this all makes sense in dynamic systems terms, but I’ll try my best to avoid such terms today). In essence, when multiple entities are brought together to form a system (as parts of the system, or sub-systems), their capacities delimit a space which is very much like the above described space of possibilities. However, this space (each point of which corresponds to a different potential actual state of the system) is not indifferent to which potential state is actualised. The space is actually a surface, the topography of which (structured by singularities) describes the way the system tends to proceed. There are two important features of this idea that need to be registered:-
1) Although this space/surface is in a certain sense atemporal, the variation in the actual states of the system is registered upon it as the movement of a point across the surface. This means that the surface is not the site of a single choice between alternative states, but rather the site of a continuous variation in state, or a trajectory which traverses the surface. Also, the atemporal character of the surface does not mean that it is eternal (the system to which it corresponds is certainly not). The topography of the surface can change as the system changes, sometimes in a properly catastrophic fashion. It is simply that the surface does not change at fixed intervals, but is itself in a state of constant variation (in the pure instant of aion).
2) The space of potential states is not closed in the way that a space of possibilities is closed. For instance, when we examine a game like chess, we project a space of possibilities dictated by the various ways the pieces can move, with points of chance corresponding to the alternating turns of the players. At no point do we consider the possibility of one player shooting another player in the chest, voiding the game (or even Ray Brassier’s favourite possibility: the sun going supernova and ending all human life, and the game along with it). This possibility is excluded when we set up a closed space of possibilities, and it certainly does not turn up on the corresponding probability matrix. Virtual multiplicities are not closed, but open, insofar as they do not exclude the possibility of some externally induced catastrophe which either redistributes the singularities on the surface or destroys the surface entirely (by destroying the system as such). We might say that the tendencies of a system provide its ‘immanent probabilities’, which are not calculated on the basis of the delimitation of all possible outcomes, but rather only on the basis of the relevant capacities of the elements of the system. The probabilities that we retroject onto systems through statistical analysis attempt to capture these tendencies, but they are not straightforwardly representations of them.
So, the model here is that in each system at each moment (in the pure instant), there is a selection of the actual trajectory of the system (a trajectory drawn on the virtual surface) and at the same point the virtual surface is reconstructed out of the actual, in such a way that the structure of the surface itself varies. This reciprocity between the virtual and actual is the major point of the third synthesis of time in D&R, and is very important, but I won’t go into it in detail. What is important is that this model of a pure instant underlies what Deleuze says about chance entering at each instant. Chance is not delimited and sliced up in advance, so that it may only enter the chess game at certain points and in certain ways, but enters at every point, so that there might at any time be radical and catastrophic disruptions of the system’s order. Deleuze does indeed have some affinity with Meillassoux and Badiou here.
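For those who do want the dynamical systems gloss, the picture sketched above can be put in a few lines of code. This is only a toy model under my own assumptions, not anything in Deleuze: the minima of a potential function play the role of singularities (attractors), gradient descent plays the role of the system’s tendency, and an ‘external shock’ reshapes the surface itself mid-trajectory.

```python
# Toy illustration: tendencies as gradient flow on a potential surface.
# The minima of V play the role of singularities (attractors); an
# "external shock" reshapes the surface itself, altering where the
# system tends.

def grad(V, x, h=1e-6):
    """Numerical derivative of the potential V at x."""
    return (V(x + h) - V(x - h)) / (2 * h)

def evolve(V, x, steps=1000, dt=0.01):
    """Follow the system's tendency: descend the surface's gradient."""
    for _ in range(steps):
        x -= dt * grad(V, x)
    return x

# A surface with two attractors, roughly at x = -1 and x = +1.
double_well = lambda x: (x**2 - 1)**2

# Starting at 0.5, the trajectory tends toward the basin near +1.
x = evolve(double_well, 0.5)

# An external shock redistributes the singularities: adding a tilt
# term deepens one basin and shifts the point the system tends toward.
tilted_well = lambda x: (x**2 - 1)**2 + 0.5 * x

x_after = evolve(tilted_well, x)
print(x, x_after)
```

The important disanalogy, which the post insists on, is that for Deleuze the surface is not given once and for all: the reshaping step is not an exception but something that can occur at every instant.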
The most important feature of this model is that the potentialities (or virtualities) which are actualised are not indifferent to their actualisation, but partially determine which actual trajectory is selected. The crucial phrase in the last sentence is ‘partially determine’. The virtual structure of a system is not indifferent to its actualisation, but it does not for that matter fully determine it. This is precisely because of the openness of virtual multiplicities discussed above. They allow for the possibility of external shocks to the system which can range in strength: from redirecting the actual trajectory of the system away from its immanent tendencies (the structure of the surface), through reshaping the virtual surface (altering the immanent tendencies, perhaps catastrophically), to decomposing the system entirely. The real secret of Deleuze’s modified spinozism is grasping how it is that we move from partial determination to the complete determination of the actual, so that whatever happens happens necessarily, while simultaneously allowing for the ever-present possibility of radical disruption. We must understand how the world is both completely determined at each point, and yet how chance enters into this determination at every instant.
4. Process Mereology and Sign-Systems
We aren’t yet ready to grasp Deleuze’s solution to this problem. We still need to explore some more of his metaphysics, specifically his mereology, or theory of composition. This is because we can only get a good grasp on what is meant by an external shock or a disruption once we understand how it is that systems interact with one another, and we can only do this through Deleuze’s account of how systems are composed out of their parts, or sub-systems. This elaboration will strike those of you who know Spinoza as very similar to his account of modes, and for good reason. Deleuze’s account of beings/processes/systems is very similar to Spinoza’s account of modal existence, particularly of modes of extension. I won’t provide an exhaustive comparison, but I will mention one or two important similarities as we go. This is where it gets very technical, and perhaps too dense, but here we go.
A system is composed by the capacities of its parts, and the way these capacities tend to be actualised in relation to one another. The interactions of the parts constitute the system as a process. This process maintains itself (to a greater or lesser degree) in relation to its environment. The interaction of the parts only composes a system in its own right insofar as new capacities emerge out of them: capacities that the system can display in relation to other systems, but which are nonetheless distinct from the capacities of the parts themselves. These new capacities enable the system to itself compose larger systems, which themselves have tendencies (governing the interaction of their parts) and further emergent capacities. As such, something counts as an entity precisely if it has a degree of power, i.e., precisely insofar as it is able to do something that its parts alone cannot. This is a very Spinozan thought.
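To make the mereological claim concrete, here is a toy sketch (purely my own illustration; the class and capacity names are invented, not Deleuze’s terminology) of a whole whose capacities emerge from, but are not reducible to, its parts, while the parts retain capacities in excess of the whole:

```python
# Toy sketch of the mereological claim: a system's capacities emerge
# from, but are not reducible to, the capacities of its parts, and the
# parts retain capacities in excess of the whole.

class System:
    def __init__(self, name, parts=(), emergent=()):
        self.name = name
        self.parts = list(parts)
        # Capacities the whole displays that no part displays alone.
        self.emergent = set(emergent)

    def capacities(self):
        """What the system as a whole can do."""
        return self.emergent

    def part_capacities(self):
        """Everything its parts can do, whether or not the whole uses it."""
        caps = set()
        for p in self.parts:
            caps |= p.capacities() | p.part_capacities()
        return caps

heart = System("heart", emergent={"pump blood"})
lungs = System("lungs", emergent={"oxygenate blood"})
vocal_cords = System("vocal cords", emergent={"vibrate"})

body = System("body", parts=[heart, lungs, vocal_cords],
              emergent={"run", "speak"})

# Irreducibility: the whole can do what no part alone can.
assert "run" in body.capacities()
assert "run" not in body.part_capacities()

# Excess: the parts can do things the whole makes no use of
# (a transplanted heart can beat in another body).
assert "pump blood" in body.part_capacities()
assert "pump blood" not in body.capacities()
```

Obviously a real rendering would need the emergent capacities to be generated from the parts’ interactions rather than stipulated, but the two asymmetries the post goes on to discuss (irreducibility and excess) are visible even in this crude form.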
It must also be noted that just as the interaction of a system’s parts produces the capacities of the whole, it also contributes to determining the tendencies governing its interactions with the other systems it composes itself with. So far we have talked of tendencies as if they are only present in composed systems, as if the tendencies governing the way a sub-system’s capacities are actualised only exist when it is actually part of a larger system. It is true that the tendencies governing the interaction of the system are irreducible to the tendencies governing the parts of its sub-systems; the tendencies of the whole emerge out of the tendencies of the parts just as the capacities of the whole emerge out of the capacities of the parts. However, a heart will tend to beat if it is appropriately situated in a body, and although this tendency is not present when it is removed from that system, if it is transplanted into a new body it will most likely tend to beat again. Tendencies are nascent within parts, even if the tendencies of a system are not entirely reducible to those parts. We can thus legitimately talk about the tendencies of hearts in relative isolation from their situation in a given system.
The above considerations lead us to a crucial point: just as the capacities which emerge from the interaction of a system’s parts are not reducible to the capacities of its parts, the capacities of the system do not exhaust the capacities of the parts. The same can be said for the relation between the tendencies governing the interaction of a system’s sub-systems and the tendencies governing the parts of those sub-systems. To put this in a catchier way: the virtualities of the whole are irreducible to the parts, and the virtualities of the parts are in excess of the whole.
This has a few important consequences:-
1) Individual entities can compose multiple different (and potentially overlapping) wholes. This is to say that something need not be a part of only one larger system, but can potentially be a part of several, potentially quite different larger systems (I can be a member of a political party and a member of a racket club, even though neither is a part of the other).
2) The way the capacities of a system emerge from the capacities of its parts can be understood in precisely these terms. For instance, we understand an interaction between two systems as the creation of a further system composed of the interactions of their parts. We might, for the sake of convenience, call this a link-system. This means that capacities for interaction can similarly be understood to be constituted out of capacities to form such link-systems. Importantly, the tendencies governing the interaction of the parts within a given system constrain the way that those parts can exercise their capacities in forming such link-systems, and thus these tendencies also play a role in determining the emergent capacities of the system. You can see that there is a lot more technical work to be done on this particular issue, which I’ll have to hold off on right now.
The important point to draw from this is that although the self-stabilizing dynamic of a larger system (a stable pattern in which its parts tend to interact) can be in a certain sense indifferent to those capacities of its parts that do not go towards constituting it (those which do not set up its virtual space), these capacities can nevertheless affect it. This is because the capacities of a given system are not independent of one another; they affect and limit one another. For instance, I cannot both play the trombone and the clarinet at the same time (in truth I can do neither on its own, but you get the point). This means that when the parts of a system compose themselves with other entities, this can disrupt the way they function (or simply alter the way they function) as part of that system. Take for instance the catastrophic fashion in which carbon monoxide bonds to my red blood cells, preventing them from carrying oxygen around my body, thus starving the furnaces on which my other cells thrive.
Indifference is thus perhaps the wrong word to express the way in which the larger system relates to the excessive virtualities of its parts. Ignorance might be better (insofar as I can be ignorant about the carbon monoxide steadily poisoning me, even while my body is far from indifferent to it), although this seems too anthropomorphic. A far better distinction would be that between those capacities of its parts a system is sensitive to, and those which it is insensitive to. My body is not sensitive to carbon monoxide, but it is certainly sensitive to heat, infection, and other kinds of negative composition. In the former two cases it will try to cool me down, and produce antibodies, respectively. This is because my body is a self-regulating system, which responds to signs given off when my parts interact with other things (producing link-systems).
This is why Deleuze talks about beings as sign-systems. When two systems interact to form a link-system (when the series resonate, yada yada…) they give off a sign which is picked up by those higher level systems of which they are sub-systems, at least, those which are also sensitive to such signs. These signs are what we might call actual events. They are occurrences or happenings that actualise ideal events. It is in this sense that a sign has sense (which is the ideal event). It should also be noted that the transmission of a sign need not be a one-off actual event, but the link-system across which the sign ‘flashes’ is an information channel which can persist, across which signs can continue to be sent. Far more needs to be said here about how it is that systems become sensitive to certain kinds of signs (and thus precisely how they grasp sense) but it can be left for now.
Entities are sign-systems insofar as they are predictive processes which cope with their environment. An entity copes with its environment by sensing signs and responding to them (by either modulating the interactions between its parts, initiating interactions between its parts and other systems external to it, or even by incorporating those external systems as new parts). All of this is just the entity’s self-regulation or self-stabilisation in the face of its environment. Sign-systems can also be more or less stable, insofar as they can be more or less able to cope with external shocks.
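A minimal sketch of this picture of self-regulation, on the assumption of a deliberately crude model in which a sign either finds a response or passes unregistered (the sign names and responses are my own illustrative inventions):

```python
# Minimal sketch of an entity as a self-regulating sign-system: it
# responds only to signs it is sensitive to, and is simply blind to
# the rest (as my body is blind to carbon monoxide).

class SignSystem:
    def __init__(self, responses):
        # Map from sign to regulatory response; the keys define what
        # the system is sensitive to.
        self.responses = responses
        self.state = "stable"

    def receive(self, sign):
        if sign in self.responses:      # sensitive: self-regulate
            self.state = self.responses[sign]
        # insensitive signs pass without registering at all

body = SignSystem({
    "heat": "sweating",
    "infection": "producing antibodies",
})

body.receive("heat")
assert body.state == "sweating"

body.receive("carbon monoxide")   # insensitive: no regulation occurs
assert body.state == "sweating"
```

The crude part is precisely what the post flags as needing more work: how a system becomes sensitive to new kinds of signs, which a fixed response table cannot capture.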
This notion of self-stabilisation is very much like Spinoza’s conatus. However, I would stress that the crucial difference between Deleuze and Spinoza here is that such stabilisation is not fixed to an essence. For Spinoza, a mode strives to maintain itself within fixed limits that are set in advance by its essence. For Deleuze, a system has no fixed limits within which to maintain itself, and thus the way it maintains itself can develop and change over time. The best example of this is obviously the evolutionary adaptation exhibited by populations of organisms. There is much more that could be said here about the relation between the concepts of stability and adaptability, and even about how this kind of notion of self-relation feeds into Deleuze’s description of entities as folds. However, I should again move on.
There are two very important remaining points to make about Deleuze’s mereology:-
1) Just as with Spinoza’s modes, for Deleuze, it’s processes all the way down. This means that there is no smallest entity/process/system, there are always more parts of parts. I don’t think that this thesis is trivial, or that it needs no more explanation. It definitely needs elaboration in relation to the next mereological thesis, but I won’t go into detail here. As an aside, if you’re looking to justify this claim, it can be justified in terms of Deleuze’s theory of time. He both thinks that processes synthesize a present, or a particular rate at which they function (the first synthesis of time / chronos), and that there is no such thing as a smallest present, or a single rate at which time flows (third synthesis of time / aion).
2) More interestingly, the same does not apply in the opposite direction. There are two reasons for this. Firstly, Deleuze does endorse that there is a totality of entities (which implies an upper limit on composition). Secondly, the totality of entities cannot itself compose a system (or entity), because such a system could neither develop emergent capacities to interact with other entities, nor be said to stabilize itself in relation to its environment. This leaves us in the awkward position of having to posit some kind of upper limit which stops short of the totality, and the very notion of this seems strange. I have a rough idea of how to make this work, which relates to the details I skipped over in the last point, but we will have to leave it for another day. For now, the important fact to take away is simply that the whole is not itself a system. This is consistent with Spinoza, for although Substance is a being, it is not the composition of all modes (which would itself be a mode).
What remains to be shown are two things:-
1) How it is that, if the virtual structure of each system (including the larger systems any given set of systems composes) does not fully determine its actualisation, any system can have its actualisation fully determined, and indeed how all systems are fully determined in a way consistent with robust determinism.
2) How it is that chance enters at each instant of this complete determination, in virtue of the openness of virtual multiplicities, or the openness of systems to external shocks.
This requires that we further examine the nature of external shocks and disruptions, as well as further understanding the way that any given system is both situated within a wider context, as well as how it subsists within a material substrate. Ultimately, this will lead us to understand the nature of Deleuze’s monism properly. Hopefully I’ll have the posts that clear these issues up (at least provisionally) ready in the next few days.
Hi! First off, congratulations on the new blog! Has been interesting reading so far, hope you carry on with it.
I just have a question regarding chance and external shock. Given the strong determinism in Spinoza/Deleuze I’m not quite sure in what sense you mean chance. Do you mean there is an undetermined part of the process of determination (an actually random element), or do you mean that from the point of view of some systems, an event is chance in that it does not come from the system itself but still plays a role in determination?
I ask because if you mean the former, then I think we’ll need more than an ‘external shock’ to show that there is an actually random element in determination.
If on the other hand you mean the latter then your final question (2) is perhaps answered in epistemological terms. Chance only appears to arise because we are focussing on the things from a particular system’s point of view, and not that system which the ‘chance’ external shock is a part of.
At any rate, I’ve found this post elucidates Deleuze’s concept of the virtual very nicely!
Your question is very astute, and it indeed points in exactly the direction this series of posts is going. The next post will go into what exactly external shocks are and how they disrupt systems in more detail.
The answer is indeed something like the latter (if we introduce some random element then we return to indeterminism and perhaps negative theology). The real intelligible structure of a system (its virtual structure) does not describe all of the potential interactions the system can engage in, most importantly those which fundamentally change this virtual structure or ultimately destroy it. It is in this sense that the external shock is contingent, but you are entirely right that it need not be contingent when we take into account either the system’s wider context (its environment) or the substrate in which it is realised (the milieux of lower level systems out of which it is composed). When we zoom out, as it were, the contingency disappears as we discover a good reason why the external shock happens.
There are obviously epistemological implications here, but this is not simply an epistemological insight, but a properly metaphysical one. And it leads us to the difference between Deleuze’s and Leibniz’s versions of sufficient reason, which corresponds roughly to the distinction between potential and actual infinity:
In Leibniz, sufficient reason is grounded in the fact that God comprehends the totality of reasons; this is an actual infinity. In Deleuze, the principle of sufficient reason allows that we can always regress further in order to find a reason for an occurrence which is not encompassed by the capacities and tendencies which make up the system it affects. However, this is a potentially infinite regress, because one can always demand that we expand the context further (both outward and downward) in order to make sense of why precisely this occurrence happened.
To put the important point I’m trying to get across about Deleuze’s version of sufficient reason succinctly: complete determination is only achieved when all of the virtual factors which partially determine the actual are taken together, i.e., when the totality of all systems and their parts are reciprocally affecting one another; yet this totality of reciprocal determination is precisely the only thing that we are excluded from understanding. We can always find further reasons, there is no point at which we find some miraculous irruption of _pure_ chance, pure undetermined randomness, but we can never have the totality of reasons. Precisely what Leibniz grounds his notion of sufficient reason on is what Deleuze precludes, the intelligibility of the whole.
This is why I find the whole idea that “Deleuze is a philosopher of Absolute Idea” so absurd. Deleuze is indeed a “thinker of the One”, but the One has no intelligible content, it is just the LIMIT of determination, the limit at which partial reciprocal determination becomes complete determination. In this sense, if nonsense plays the role that the Event plays in Badiou, then the One plays the same role as the Void (or Chaotic Time in Meillassoux), but it does not lead to negative theology because it is not something entirely other than beings, it is just them taken together (though not as composing some new being).
The Plane of Immanence is essentially Spinozan Substance, stripped of all content, reduced to a simple limit of determination.
I’ll be posting something about this in more detail shortly with a bit of luck.
Thanks for clearing that up for me, I look forward to the next part. It seems that as far as us poor humans are concerned, Leibniz and Deleuze give us the same possibility. Sufficient reason is indeed grounded in God’s total comprehension of that which we only partially understand via the infinite analysis. So us humans can (or must) always ‘expand the context further’ to comprehend that which is determined – which is pretty much where Deleuze leaves us. Not that this is necessarily a bad place to be left!
The difference from a practical point of view is the comforting thought of God comprehending the infinity of reasons in Leibniz’s system. Even though that reminds me of what a tutor said to me last year. “It is all well and good knowing this is the best of all possible worlds, and truly believing that, but it isn’t very comforting if I’m in just this little piece of the world, which perhaps isn’t very nice at all”.
I may post something about this general area on my own blog in a bit, at http://transitiveaxis.blogspot.com (Only one post at the moment, but hopefully some more content soon).
You may have misunderstood me. Deleuze and Leibniz do indeed differ in a most important way.
Deleuze does NOT ground sufficient reason in God (a particular being) whose infinite intellect is capable of grasping the whole. Deleuze denies that there is any being which could grasp the whole, or even that the idea of grasping the whole makes any sense. This flows from his commitment to the principle of univocity.
What Deleuze does is to make three different insights compatible:
1. That there is always the possibility that some given situation/system whose internal structure (the capacities of the parts and the way they tend to be actualised in relation to one another) we grasp, can be disrupted (i.e., have that internal structure changed) in a way that we do not grasp on the basis of understanding that internal structure.
2. And: that that disruption can always be made sense of as part of a wider context (either as part of a larger situation, or in terms of the details of the material substrate of the situation).
3. Yet: that there is no way of predicting with _complete_ certainty how any given system will proceed. Here Deleuze has a great deal of affinity with Meillassoux, minus the negative theology.
Oh, no, sorry – I wrote that in a bit of a rush. Obviously Deleuze doesn’t ground sufficient reason (or anything!) in God!
I simply meant that, when all is said and done, even if (for Leibniz) sufficient reason is grounded in God, Leibniz’s man must still face a neverending chain of ‘contexts’ so to speak. Of course the nature of these contexts is completely different for each thinker, and for Deleuze the chain of contexts is a metaphysical, not just an epistemological feature.
As for (3) I think we need to be careful – if science can with complete accuracy predict the movements of a billiard ball (which it can’t at the moment, but let’s say it could) outside of a laboratory then for Meillassoux this would not be *so* surprising. Though for Deleuze it might be.
The kind of certainty Meillassoux thinks is lacking is not merely the precision of our formulation of the laws of nature and our subsequent predictions. Rather it is the certainty that the laws of nature will not all change tomorrow or the day after, hence his principle of unreason in After Finitude. Of course, we might wish to collapse these two different uncertainties for good philosophical reasons…