I’m always at a loss on how to start a post when the blog has been on hiatus for a while, which is something that seems to happen periodically with Deontologistics. The most recent hiatus has been a very long one, but it seems there are people still out there reading what comes out of this cognitive outflow vent. I’ve just returned from London, where I attended the third Matter of Contradiction conference: War Against the Sun, and the Speculative Aesthetics roundtable organised by James Trafford. These were both fantastic events, at which there was a palpable sense that certain divergent theoretical orientations are beginning to coalesce into a coherent trajectory of thought (indexed by the words ‘rationalism’, ‘accelerationism’, and ‘prometheanism’). I won’t say anything more about the content of these events, as the videos and transcripts of them will no doubt be appearing at some point, but I will mention that I had the opportunity to meet several very interesting people who knew me from the work I’ve posted here. This was very heartening, and convinced me that I should probably start putting some thoughts up here again.
I don’t have a lot of new material to put up here right now, as I’m currently working on the second half of my paper on Graham Harman (the first half of which is available here). However, after having some very interesting discussions with people on the topic of freedom (which I’ve written about in various ways: here, here and here), I realised that I had some old material languishing in a blog comment somewhere that some people might find interesting. As such, here’s some thoughts on the topic and its misappropriation by voluntarism.
1. Defining Freedom
I think it’s possible to define what freedom is in a reasonably helpful and comprehensive way, and that doing so cuts through a lot of unhelpful debates in which the notion is deployed in a one-sided fashion. The first step in this process is to split the discourse of freedom into its qualitative and quantitative questions.
Qualitative questions are precisely what thinkers like Kant, Heidegger and Sartre are dealing with when they talk about human freedom in its various guises. They’re talking about what it is for something to be a rational agent, which is meant to distinguish rational agents from other things (contra Spinoza). The problem is that this is often abstracted from causal questions regarding the genuine functional structure any causal system would have to have in order to count as such. Heidegger and Sartre (who is the most crude here) tend to collapse this into a brute ontological distinction between modes of Being (existence/occurrence, for-itself/in-itself, respectively), whereas Kant’s transcendental psychology tends not to be understood as the abstract functional architecture that it is (leading to Schopenhauer, Schelling, and other vulgar Kantians). The work of Wilfrid Sellars is crucial here insofar as it gives us the resources to develop Kant’s functional account of rational agency in a totally naturalistically permissible fashion (see here).
It’s important to recognise that qualitative questions are not simply binary matters, such as: is this system an agent or not? They also deal with qualitative distinctions in the form that rational agency takes, including distinctions regarding better and worse forms that it can take. I tried to deal with some such distinctions in my comments on Mark Fisher’s Capitalist Realism (see here) under the heading of ‘the pragmatics of spirit’, showing that there are important distinctions between the ways rational systems are structured to process their own motivations, and thereby that what he calls ‘hedonic depression’ is characteristic of a fairly impoverished form of rational agency. This was pretty impenetrable, I recognise, but it can and will be developed in a clearer way. This kind of qualitative differentiation becomes much more important and useful when we start thinking about collective agency, insofar as the differences between functional forms of social organisation are much more apparent (e.g., various forms of democracy, oligarchy, economic organisation, etc.), and much more stark in their normative consequences (e.g., with regard to the extent that a populace can be held responsible for the actions of its government).
Quantitative questions are precisely what thinkers like Spinoza, Foucault and (to some extent) Berlin are interested in. The reason they are quantitative rather than qualitative is that they are empirical rather than transcendental questions. This is to say that they’re interested in the specific causal mechanisms that instantiate the abstract functional structures that the qualitative questions deal with. There are two important distinctions to be had on the quantitative level: one between positive (Spinoza) and negative (Foucault) freedoms, and one between actional (mainly Spinoza) and rational capacities (mainly Foucault). The former distinguishes between bare capacities to effect results and the independence of these capacities from external influences (both in terms of affection and prediction), and the latter distinguishes between capacities to act upon the world and capacities to reason about one’s action upon the world. The former distinction is essentially dealing with two sides of the same coin, this coin being causal reasoning. The positive side corresponds to the default causal inference (A -> B) and the negative side to the potential defeasors that would undermine such an inference (C -> ~(A -> B)). The latter distinction deals with the capacities we have for producing effects at our disposal, which is to say, the choices we can make, and the capacities we have for deploying these capacities, which provide the conditions of the possibility of action.
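The default/defeasor structure of causal inference just mentioned can be sketched in a few lines of purely illustrative code, where the function name and the way defeasors are represented are my own inventions rather than anything in the Sellarsian literature:

```python
# Toy model of defeasible causal inference: a default rule A -> B licenses
# inferring B from A, unless some defeasor condition C obtains, in which
# case C -> ~(A -> B) blocks the default. All names here are illustrative.

def default_inference_fires(antecedent_holds, defeasors_active):
    """Return True iff the default inference A -> B goes through:
    A must hold, and no defeasor C may be active."""
    return antecedent_holds and not any(defeasors_active)

# Positive freedom: the bare capacity to effect the result (the default rule).
# Negative freedom: the robustness of that capacity against defeasors
# (external interferences that would undermine the inference).
assert default_inference_fires(True, []) is True      # default fires
assert default_inference_fires(True, [True]) is False  # defeasor blocks it
```

On this picture, increasing negative freedom amounts to shrinking the set of conditions that can show up in the second argument: making one's capacities counterfactually robust.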
These quantitative margins of freedom can all be increased: by increasing our abilities to do things, by increasing our ability to reason about which things we do and how we do them, and by making these abilities counterfactually robust in the face of varying external conditions. The most interesting dimension of all this for me is the positive and negative rational capacities, which function as conditions of the possibility of action. As you may or may not know, I’m kind of obsessed by the idea of cognitive resources and the way these constrain both individual and collective decision making. For instance, I think it’s possible to define attention in functional terms as a quantitative positive rational capacity. It’s something one can have more or less of (in different kinds, no doubt) and it is consumed as a resource in making decisions about how to deploy one’s other actional capacities. I’m equally interested in defining quantitative negative rational capacities, such as resistance to the hijacking of one’s attentional mechanisms (e.g., information filters), thereby conserving cognitive resources. These are the sorts of things that Foucault was interested in, insofar as Power is essentially defined as action upon decision making.
There’s a whole lot more that could be said here about how this ties into the Sellarsian/Brandomian account of free will based on RDRDs, but I’m going to leave it there. I think the qualitative/quantitative, positive/negative, rational/actional distinctions are pretty helpful in organising talk about freedom for now.
2. Diagnosing Voluntarism
Given these definitions, we can see that the real problem with voluntarism is that it occludes the causal (i.e., quantitative) dimensions of freedom. It essentially says: stop worrying about how free you are, because freedom is always a pure qualitative break in the causal order, an upsurge of noumenal causality that brings about change in the causal/social/historical order. The disastrous consequence of this is that it discourages us from actively working upon ourselves (both as individuals and as collectives) to make ourselves more free, by increasing both our positive and our negative freedoms.
Of course, some voluntarists partially mitigate this by adopting a loose sort of Spinozism, insofar as they’re willing to talk about what I’ve called actional capacities. This is motivated by a recognition that we actually have to engage in instrumental reasoning regarding the means at our disposal to achieve our ends. However, they fall short of dealing with rational capacities, which they close off in favour of some noumenal freedom of decision. This leads to them being unwilling to consider the ways in which our decision making abilities are both internally limited, and externally influenced. When applied to the level of collective freedom this is even worse, because it precludes thinking about organisational structure in any way whatsoever (i.e., the ‘will of the people’ will just spontaneously coalesce and then deploy the resources available to it).
On this basis, I think we can draw a loose distinction between theoretical and practical strands of voluntarism. The former is an academic perspective (e.g., Badiou, Zizek, Hallward, etc.) that claims freedom has nothing to do with our causal constitution, and thereby licenses ignorance about causal factors that effect and affect our freedom. The latter is less often explicitly defended than implicit in the rough patterns of discourse and action within a wide variety of left-activist groups (which I’ve elsewhere called pseudarchy). It holds that our freedom actually consists in (or is at least enhanced by) our ignorance of such causal factors: Don’t think too hard, because our freedom from Power is proportional to our ignorance of ourselves! This is manifestly stupid insofar as it depends on viewing freedom in quantitative causal terms in order to license its demand for ignorance of this causal basis, but it’s depressingly common nevertheless.
3. Dislocating Subjects
There is a further dimension to the above discussion of qualitative freedom and rational agency that is worth considering here. This aims to determine which causal systems are capable of being counted as subjects, insofar as it determines which causal systems are capable of being counted as responsible for their thoughts and actions, insofar as they are capable of providing reasons for them. The additional dimension that complicates things consists in the fact that, although there are necessary functional constraints on what can count as a subject (e.g., we can’t decide to count the chair I’m sitting on as a subject, call him ‘Neil’, because he’s not capable of doing what he needs to do to be one), these do not provide sufficient criteria for subjective individuation (e.g., they don’t decide whether a human being is the same subject before and after an amnesiac episode). I think what fills the gap are socially instituted norms for determining who counts as who, i.e., for counting whether the same meat counts as the same person between two times. It’s as if causal systems capable of reasoning are counters moved about in the game of giving and asking for reasons. They have to have certain features to be a counter, but ultimately who they represent in the game is a function of the playing of the game itself.
An interesting way of thinking about this is that it turns the usual arguments about personal identity on their heads. Usually, the argument is a matter of what underlying feature makes a person a person, such that we can count an animated body at one time as responsible for the same thoughts and actions as an animated body at another. I’ve always hated these bloody debates. Now I know why, insofar as they get everything arse-backwards. The subject is nothing but the locus-of-responsibility, the being-counted-as-the-same-as in the relevant way. There can thus be different socially instituted criteria for determining how we divvy up responsibility, and thus for how we individuate subjects. These are constrained by certain factors (e.g., ‘Neil’ can’t be a subject on any reasonable criterion) but we can imagine all kinds of bizarre sci-fi scenarios that would challenge the current boundaries of our practices of individuation and force us to develop better ones (e.g., the uploading, copying, and merging of ego-patterns (check out the ideas found in Eclipse Phase)).
There are two interesting upshots of this, I think:-
i) I think Ray Brassier is right that we have to dissociate the rational subject from the phenomenal self (see here). Metzinger is right that there are no phenomenal selves as once understood, only self-models. The question is whether these self-models are limited to being models of the biological processes of the organism, or whether they can extend to be subject-models of the responsibilities that individuate that organism as a given subject. I think they can and do. Moreover, I think this is still perfectly consistent with Metzinger’s picture, insofar as we can imagine cases in which someone is mistaken about just who they are, not simply in the sense of asylums filled with people convinced they’re Napoleon, but in the more disturbing sense of the same organism being systematically altered in such a way that it did not see the discontinuity involved in this alteration. For instance, if I was kidnapped and given elaborate neurosurgery (a la Neuropath), my self/subject-model could be hijacked in such a way that I could have all my beliefs, desires, and behavioural tendencies modified, and yet have my memories left intact in such a way that the phenomenal continuity between the past self in those memories and the present self in my new experience is retained. I could be an empty vessel, a Manchurian candidate, no longer ‘Pete’ in any significant way, and yet be convinced I was.
Metzinger might be right in this case to say there really isn’t a question about whether I’m the same ‘self’, as all such selves are generated by precisely this kind of internal phenomenal continuity, for which there is no external standard by which to judge it. However, he would be wrong to extend this to subjectivity, insofar as we’d have social criteria (constrained by hard functional criteria) that could settle the question of whether causal system A and modified causal system B really count as the same person, regardless of what they think.
ii) It also reveals the somewhat counter-intuitive possibility of a subject-without-freedom. If subjects are just socially individuated loci of responsibility, then these loci needn’t be constantly in play, as it were. Temporary madness is a good example here. We downgrade the status of someone who is severely cognitively impaired, such that we no longer count them as fully responsible for their thoughts and actions. We give them truncated forms of responsibility, in much the way we do for children (what I’ve previously called sandbox responsibility). This doesn’t preclude us from giving them their status back once their cognitive machinery is functioning properly again. This is also a good example of how social criteria for individuating subjects and functional criteria for individuating rational systems can interact. However, this limbo status into which we place unembodied subjects can become more permanent.
The status of ‘being-Pete’ that would be taken away from me in the case of a severe psychotic episode would still be in some way attached to my body, given our reigning social criteria of subjective individuation, but upon the destruction of my body’s capacity to ‘house’ me, as it were, I’d be free floating. We currently have nothing in place to reinstitute such a status (e.g., cloning me a new body with sufficient similarity to do the job), but we do have ways of relating to such ‘defunct’ statuses. We do relate to the dead, and we do treat them as having certain rights, even though they may no longer have any active responsibilities. This is most clear in the case of past thinkers, such as Kant, who now stands for us as the exemplar of a certain set of theoretical ideas, which in his time he was responsible for defending. Any of us may stand in for Kant. We may take up his responsibility for him, in absentia. The very possibility of being able to do this, so that we can argue with the figures from our cultural past, who helped set the standards that we live by today, is essential if we are to be conscious of the content of these standards, the way they have developed, and the way they may yet develop. I think this is an interesting Hegelian insight that naturally follows from the above considerations.
Damn good to see you back on the screen, Pete!
One question: What happens when it becomes more ‘rational’ to count you as a MERE mechanism, not because of any tropisms on your part (as Dennett might put it), but because it genuinely generates better predictive/manipulative/explanatory outcomes?
Hi Scott, glad to be back! I’ll send you an email update shortly, once I’ve responded to the comments here. Speaking of which:-
Short Answer: We’ve got to be careful not to overdetermine what you mean by ‘rational’ there. I’m perfectly happy to admit that we can be treated as mere mechanisms, and that there are times in which we should be so treated, but the question of when it is ‘rational’ to do so is a contentious one, because it’s where all the normative issues pop up, and thus where all the substantive disagreements are going to be had.
Regardless, I think there are lots of important issues regarding predictability in this general vicinity. In particular, I’d like to emphasise the importance of Foucault’s work on technologies of power, and how these involve systems for predicting behaviour, not just influencing it. From this perspective, the project of increasing quantitative negative freedom involves treating oneself as a causal system in order to make oneself harder to predict. This complicates the simplistic alignment of causal/predictable and non-causal/unpredictable.
So you acknowledge that it could one day become more rational to abandon normative conceptions of rationality?
Excellent post, as usual, Mr. Wolfendale. I’m particularly interested in your thoughts on Brassier-on-Metzinger and the “vulgar” Kantianism that would ground freedom in the or a noumenal realm as opposed to anchoring it in the realm of rational subjectivity. In this sense, Hegel purified the notion of freedom from the vulgar ‘noumenalization’ of freedom that even Kant himself was guilty of indulging in from time to time, for example, in the Groundwork. Here are a few very sketchy thoughts on the matter…
If we follow a cognitive science-esque computer-jargon metaphor, we might say that any kind of self-conscious inquiry into human understanding, any critique of pure reason, etc., is akin to a computer program that runs its own software to reprogram its own hardware. The plasticity of the brain (“the brain that changes itself”) gives a material basis for this kind of autopoietic power. When we do this to mathematics we get… set theory and all its sublime transfinite infinities, maybe? Though it’s certainly a contentious (ab)use of mathematics in speculative philosophy, one might take Cantor and Gödel to have shown that mathematics itself provides an absolute grounding for the kind of transcendental freedom Kantian philosophy argues for. If everything in nature is itself, as Zizek often says, incomplete (in the sense that every thing is suffering from a functional kind of anosognosia in regards to its causal history), then as a rational agent, finding oneself saddled with an antinomy and yet still able to end the antinomy practically (by giving oneself over to a norm or reason or law) allows us to dig up a new concept of causality that ends the regress in a way that mechanical causality cannot. This is teleological or autonomous normative judgment. That is, making oneself accountable to and for the norms one commits oneself to (this can be a private and public process).
Incidentally, this is why I like Metzinger’s concept of ‘autoepistemic closure’ since it gives an empirical explanation for why and how a self-conscious being is able to ‘complete’ itself not in spite of but in virtue of its inability to apply its mental laws to the empirical world (which includes itself as an empirical object) with 100% accuracy. Being able to apply different concepts at will is a kind of ‘cognitive poetry’ then. But once the mind is able to do this kind of self-conscious thinking, the mind should no longer be considered as any mere appearance or phenomena but also not simply a mere non-entity, or illusion as Hume (and Metzinger) suggest. I guess then we can follow Kant and say we can know ourselves reflectively (I think he says ‘practically’) as a noumenon (I never quite understood how Kant can stomach saying this) or we can go with the German Idealists and claim that self-conscious knowledge is knowledge that simultaneously satisfies the laws of the understanding and also satisfies the ideals of Reason and that we know this is not spurious knowledge or mere wishful thinking because sensibility ensures that we are indeed in actual contact with the empirical world and its objects. Thus when we are able to experience something that is neither an appearance nor a noumenon, we think an absolute, which would simply be the name of the transcendental under the modality of reality.
I see a potential road from Hume to Kant to Hegel sketched in this very relationship between the modal category of reality (what we take as the Real or the Actual) and the faculty of sensibility. Why do we associate the Real with the Sensible? It seems to me it is for this reason: because we experience ourselves as having reciprocal determinative effect with sensible objects, i.e., we are embodied. That is to say, there is a constant conjunction between the perception of spatio-temporal objects, sensations, and changes in our internal state (and here I mean in our consciousness). That is, our thoughts are determined by our being bombarded by sensations and passions, thus, we associate reality with physical causal efficacy. But what about the knowledge about this very association? No doubt it too has causal effects on our physical experience. But this knowledge is not a product of sensation, though it is a product of a physical process, namely cognition. But if cognition (which here can be limited to Verstand) is a physical process that has causal effect on our experience but is not an act of sensibility then the (mechanical) principle of causality governing the movement of physical bodies according to the logic of sensation must be supplemented by a different principle of causality that governs the movement of physical bodies according to the logic of cognition.
Now, jumping off from Aristotle it seems, Kant and Hegel suggest that the proper causal law to apply to the governance of our cognition is teleological. But modern reductive empiricism insists that the proper level of analysis is either chemical (Hegel’s discussion of ‘chemism’ seems to apply to modern psychiatric views about neurotransmitters etc.) or, now that we are in the age of computer science, informatic, which is logico-mathematical. This is an interesting place for science to be in, since what started as an attempt to ground cognition in sensation has ended up reducing sensation to a species of ‘natural logic’. Modern scientism takes the universe to be a kind of computer. This is seen, for example, in Metzinger’s reduction of the external (i.e. real) world to a sea of “dynamic information processing”. And yet the only way he is able to make sense of all this data is to follow the biological sciences in their application of a principle of teleology or “teleofunctionalism” to the empirical data of neuroscience. Now, the question is how long are we able to postpone the introduction of a telos into the explanation of the world? To be sure, teleological judgment is always in the background of any mechanical judgment (since mechanism relies on the explanatory power of ‘essences’ which define things in terms of a functional law or definition). If we go all the way back to the Big Bang then we say in the end that the universe or nature just is the way it is for no reason at all. Thus the ultimate law of Nature’s telos is Absolute Contingency. This is Meillassoux’s doctrine and the view perhaps implied by say Lawrence Krauss, in “A Universe from Nothing”.
The absolute contingency of the universe seems, however, to fly in the face of two matters of fact: i) the stability of the laws of nature and ii) the fact that Life and Thought are governed by obedience to a different kind of teleological law, namely the Good and the True. Organic causality expresses a striving towards pleasure, or serenity, while Rational causality functions according to the laws of deductive inference and sufficient reason. Meillassoux, for his part, counters (i) by invoking transfinite set theory to argue that the existence of our ordered and nomologically stable universe could have its own causal explanation in a multi-verse that generates differently calibrated universes at random. We just happen to inhabit one that is by chance stable. Countering (ii) however is a lot harder to do, since offering an argument against the reality of the Order of Truth is already to represent your thought as exemplifying such an order. Similarly, finding oneself motivated in thought and action towards certain ends and away from others seems to situate oneself in relation to the Order of the Good. Meillassoux says that even so, absolute contingency can explain the emergence of such orders as the Good and the True by invoking the unlimited power of pure chance. He also says that the alternative is to read Goodness and Truth into the universe itself, which is to invoke Theism. But because of the problem of evil, this view is incoherent and absurd.
But in my opinion, the more elegant solution is to take the undecidability (thanks to autoepistemic closure or even better, Bakker’s BBT) itself as a necessary condition for Freedom (self-determination, autonomy, infinite conceptual revisability), and to see Freedom (construed as such) as the Good. Thus, in finding ourselves unable to derive our own nature from the nature of the empirical universe as a purposeless ground, we find ourselves thinking self-consciously the Absolute Idea, which is more or less “what to make of things?”.
Hi Sean, it’s *Dr. Wolfendale* now! As ever there’s a lot to pull apart in your comments, and I’m not going to attempt to deal with all of it. I will however comment on your closing paragraph as I find much in it that I’m sympathetic to.
I see teleological reasoning as something we’re just stuck with insofar as it is both instrumentally useful in organising causal reasoning about all sorts of phenomena (principally biological) and transcendentally indispensable in structuring our reasoning about rational agents (both individually and socially). However, this thoroughgoing functionalism of mine is tempered by an equally trenchant refusal to admit that functions have any metaphysical status. To reiterate one of my catchphrases: explanatory equivocity is compatible with ontological univocity. Nature is strictly purposeless, but that doesn’t stop us from reasoning *as if* there are purposes. I take this to be a strongly Kantian position (albeit perhaps more Kantian than Kant himself, in some sense).
This leaves us in an interesting position when it comes to thinking about the Good, precisely insofar as it forecloses the possibility of naturalising it in any way. It is here that I am happy to agree with you that there is a crucial link between the Good and Freedom, even if I don’t necessarily want to identify them for taxonomic purposes. The fact that Freedom as an abstract (qualitative) structure must be understood transcendentally before its concrete (inc. quantitative) manifestations can be studied empirically gives it precisely the right character to be the basis of an account of the Good. There’s a lot of work to be done making this idea precise, and I’m not sure it involves the concept of ‘Absolute Idea’, but there’s certainly confluences between us here.
However, whether autoepistemic closure or BBT provides anything like the condition of freedom is something I’m less sure about. I’m somewhat skeptical about the skepticism involved in both hypotheses, so I’ll have to demur for now.
The idea of “actively working upon ourselves (both as individuals and as collectives) to make ourselves more free” is something I’ve been very interested in over the last few months. I’ve been trying to expand on Kant’s concept of a “discipline of reason”, which he leaves as a sort of afterthought in the CPR. Deleuze mentions the Antinomies as being an important advance on the “classical image of thought” because they show that there are illusions internal to reason rather than simply errors external to it; maybe the Discipline will be equally important because it shows that there is a work or training internal to reason rather than simply an external prerequisite for it. I think this is also related to McDowell’s concept of reason/spontaneity/freedom as “second nature”, which again I don’t think he develops enough.
So I’m wondering whether you see “actively working upon ourselves” and/or training as inherent to freedom/responsibility/normativity. Could there be an innately rule-following animal (it seems like generative linguistics claims there can be), or does the concept of rule-following imply being trained to follow rules? Does a system of norms that doesn’t include a norm of being responsible for our norms still count as rational? (I.e. does following such a system of norms make you a subject, or is the norm of revisability, with the attendant responsibility of working upon ourselves, constitutive of subjectivity?) I’m also interested in whether you see being trained into a system of norms or into freedom, on the one hand, and actively working on ourselves to make ourselves more free, on the other, as continuous or discontinuous.
These are all excellent questions. Here are some brief thoughts:-
i) I think you’re right that there is a relation here to McDowell’s idea about second nature, but I don’t use the concept precisely because he isn’t willing to develop it along the functionalist lines that I favour. It seems more like a placeholder concept than anything.
ii) I think actively working upon ourselves and/or training is inherent to freedom/responsibility/normativity, but only in a normative sense. What I mean by this is that I think there is a certain imperative to cultivate one’s freedom implicit in the structure of rational agency itself, but that it is entirely possible to fail to live up to this imperative. The extent to which one must actually be capable of revising one’s own habits and rational capacities in order to be an agent is another matter, and I’m inclined to think what is required is fairly minimal.
iii) With regard to the division of the burden of rule-following between nature and nurture I’m inclined to say that there’s no real principled line. I think that in fact, we’re creatures with a certain amount of hardwired rule-following capability which is then expanded upon socially, but that the precise ratio of these is contingent. I think it would be possible to hardcode an AI with a lot of the practical abilities we learn through training, while still enabling it to be extensible in the necessary ways.
iv) I do think that there is a certain sort of revisability which is constitutive for full blown rational agency, namely, the in principle ability to revise one’s commitments and the concepts that constitute their contents. The ‘in principle’ bit here is difficult to specify precisely, but it involves a technical discussion of what it is for a causal system to be extensible in the relevant ways. This is all more difficult material than I’d like to get into here.
v) I see training and self-construction as fundamentally continuous, and overlapping in many ways.
I’ve been meaning to get back to this. I didn’t organize my questions very well or properly explain the connection I see between them.
I think all my questions are related in some way to the distinction between rule-following behaviour and behaviour that is merely in accordance with a rule or describable by a rule. For something like rule-following to exist there have to be regularities of behaviour, but is that all there is to it? (Wittgenstein’s “This is simply what I do”.) If I understand Dennett correctly, he says that there’s no principled difference between the two; the only justification we need for using rule-following (intentional) descriptions is a pragmatic one, e.g. when causal descriptions of behavioural regularities become too complicated to work with in predicting behaviour. This is pretty unsatisfactory to me: surely intentional descriptions are more appropriate to the behaviour of adult humans, or even dogs, than to that of, say, rocks or photons? And again, the difference in appropriateness depends on something about the way humans and rocks are, not just on our current predictive convenience.
My first idea was that training could be the basis of the distinction, which is something that Wittgenstein seems to suggest and that McDowell takes up as well; but I don’t think this will work, at least not in a straightforward way. What is training, after all, but a part of the causal history of certain animals? Does teaching a child to read differ essentially from teaching a pigeon to press a button for food?
My other idea was that the revisability of norms or rules could be the basis of the distinction, but the apparent continuity between norm-revising and training makes this questionable too.
i) I think “placeholder” is a perfect characterization of McDowell’s concept of second nature: the only work it does in his system is to leave a space for meaning outside of “first” nature. (Even though McDowell says that our spontaneity doesn’t put us outside of nature, he doesn’t give anything more than a sketch of how spontaneous second nature fits into inert first nature.)
ii) I’d be interested in seeing you expand on this, if you’ve worked it out. What is the minimal standard of revision, and why is only this minimum required for rational agency? Could be worth a post if you’ve got the time to write it up. My own inclination is to set a rather high bar for rationality; i.e. we are not rational most of the time. (Not that we’re irrational either, but that most of our behaviour is simply regular, or in accordance with a rule, rather than strictly rule-following.)
iii) My question about innately rule-following animals wasn’t meant as an empirical question, but as a conceptual one. It’s clear that humans, like other animals, are born with certain regularities of behaviour, but I think it’s also clear that a human newborn is not really “rational” in the sense of following rules. I guess it was a roundabout way of asking whether you see training as inherent to rationality; clearly you don’t. I’m not sure yet either way; it’s not sufficient for rationality, as I said above, but it might still be necessary. In my limited knowledge of AI, it seems like the most plausible approximations to human intelligence are the ones that emphasize learning or training rather than hard-wired behaviour (e.g. neural nets).
iv) I’d be interested in seeing the technical material when you get a chance to write it up.
v) This is my first inclination as well, so instead of arguing for their continuity, I’ll try to determine what could distinguish them. The obvious suggestion is that self-construction operates on already-established norms and “for the sake of” a norm of rationality or freedom, whereas training operates on a norm-less animal and (I suppose) not necessarily for the sake of any particular norm.
Another obvious suggestion would be that training involves at least two participants, a trainer and a trainee, whereas self-construction can be solitary.