In the philosophy of technology, substantivism is a critical position opposed to the common sense philosophy of technology known as “instrumentalism”. Instrumentalists argue that tools have no agency of their own – only tool users. According to instrumentalism, technology is a mass of instruments whose existence has no special normative implications. Substantivists like Martin Heidegger and Jacques Ellul argue that technology is not a collection of neutral instruments but a way of existing and understanding entities which determines how things and other people are experienced by us. If Heidegger is right, we may control individual devices, but our technological mode of being exerts a decisive grip on us: “man does not have control over unconcealment itself, in which at any given time the real shows itself or withdraws” (Heidegger 1978: 299).
For Ellul, likewise, technology is not a collection of devices or methods serving human ends, but a nonhuman system that adapts humans to its own ends. Ellul does not deny human technical agency but claims that the norms by which agency is assessed are fixed by the system rather than by human agents. Modern technique, for Ellul, is thus “autonomous” because its principles of action are internal to it (Winner 1977: 16). The content of this prescription can be expressed as the injunction to maximise efficiency: a principle that overrides whatever conceptions of the good are adopted by the human users of technical means.
In Chapter 7 of Posthuman Life, I argue that a condition of technical autonomy – self-augmentation – is in fact incompatible with technical autonomy. “Self-augmentation” refers to the propensity of modern technique to catalyse the development of further techniques. Thus while technical autonomy is a normative concept, self-augmentation is a dynamical one.
I claim that technical self-augmentation presupposes the independence of techniques from culture, use and place (technical abstraction). However, technical abstraction is incompatible with the technical autonomy implied by traditional substantivism, because where techniques are relatively abstract they cannot be functionally individuated. Self-augmentation can only operate where techniques do not determine how they are used. Thus substantivists like Ellul and Heidegger are wrong to treat technology as a system that subjects humans to its strictures. Self-augmenting Technical Systems (SATS) are not in control because they are not subjects or stand-ins for subjects. However, I argue that there are grounds for claiming that such a system may nonetheless be beyond our capacity to control.
This hypothesis is admittedly speculative, but there are four prima facie grounds for entertaining it:
1. In a planetary SATS, local sites can exert a disproportionate influence on the organisation of the whole but may not “show up” for those lacking “local knowledge”. Thus even encyclopaedic knowledge of current “technical trends” will not suffice to identify all future causes of technical change.
2. The categorical porousness of technique adds to this difficulty. The line between the technical and the non-technical is systematically fuzzy (as indicated by the way modern computer languages derive from pure mathematics and logic). If technical abstraction amplifies the potential for “crossings” between technical and extra-technical domains, it must further ramp up uncertainty regarding the sources of future technical change.
3. Given my thesis of Speculative Posthumanism, technical change could engender posthuman life forms that are functionally autonomous and thus withdraw from any form of human control.
4. Any computationally tractable simulation of a SATS would be part of the system it is designed to model. It would consequently be a disseminable, highly abstract part. Multiple variants of the same simulation could thus be replicated across the SATS, producing a system qualitatively different from the one it was originally designed to simulate. Elena Esposito examines a related idea: users of financial instruments employ uncertainty as a way of influencing the decisions of others through their own market behaviour. Esposito argues that the theories economists use to predict market behaviour are performative: they influence economic behaviour, though their capacity to predict it is limited by the impossibility of self-modelling (Esposito 2013).
If enough of 1–4 hold, then technology is not in control of anything but is largely out of our control. Yet there remains something right about the substantivist picture, for technology exerts a powerful influence on individuals, society and culture, if not an “autonomous” one. However, since technology is self-augmenting and thus abstract, it is counter-final: it has no ends of its own, and it tends to render human ends contingent by altering the material conditions on which our normative practices depend.
Esposito, E., 2013. The structures of uncertainty: performativity and unpredictability in economic operations. Economy and Society, 42(1), pp.102-129.
Ellul, J. 1964. The Technological Society, J. Wilkinson (trans.). New York: Vintage.
Heidegger, M. 1978. “The Question Concerning Technology”. In Basic Writings, D. Farrell Krell (ed.), 283–317. London: Routledge & Kegan Paul.
Roden, David. 2014. Posthuman Life: Philosophy at the Edge of the Human. London: Routledge.
Winner, L. 1977. Autonomous Technology: Technics-out-of-control as a Theme in Political
Thought. Cambridge, MA: MIT Press.
My last post ended with a modest conclusion about the relationship between pragmatist accounts of agency and world-hood:
“For Davidson, and for pragmatists more generally, then, the ability to interpret and be interpreted in turn is a condition of intentionality and thus agency. But this requires that each agent understand the other to belong to a shared world. Moreover, it requires that there be such a world – in some sense: absent this condition, there would be nothing to interpret.
But what is this idea of a shared world an idea of?
Under what conditions can two creatures be said to belong to one?”
Clearly, one way we might parse this notion of co-worldliness involves a form of metaphysical realism owing little to phenomenological approaches.
There are different ways of expressing this realist account. We might say that reality is what some uniquely complete and true theory represents, or perhaps that it is the totality of states of objects on which the accuracy or truth of such representations hinges.
However, in the face of Putnam-style objections to the effect that there can be no such unique theory, the metaphysical realist can opt for a more minimal formulation: claiming only that the non-mental parts of the world exist independently of minded creatures, and that their nature is likewise independent of how they are thought about. This does not commit realists to there being “one true and complete description of the world” (Devitt 1984: 229). There seems nothing incoherent in supposing that our best theories of the real might be incomplete and partial.
Indeed, the world as a whole might not be representable: perhaps there can be no complete representation of it, or perhaps there are aspects of reality that cannot be represented at all.
But at this point the commonality of the real seems to be receding. If reality can only be described discrepantly, or if it is not fully representable, then what content can we attach to the idea of the shared world that figures in Davidson’s conditions of interpretation and communication? According to idealists, this idea of reality is not even intelligible. So if the common world is the world according to metaphysical realism, this may threaten the intelligibility of pragmatism and thus of the local correlationism regarding agency which falls out of it.
I think this is a problem for any account in which, as Robert Brandom puts it, “meaning and understanding are co-ordinate concepts, in the sense that neither can be properly understood or explicated except as part of a story that includes the other” (). For such understanding must be exhibited practically in a social field in which estimates of what speakers say or think are updated given the circumstances in which they are said or acted upon. Different theorists may describe these interpretations using different or discrepant vocabularies, but the presupposition of commonality seems to be built into any theory for which content is manifested through practice.
If pragmatist accounts of thought and agency require a common world, then perhaps they need an idea of world that is not an abstract metaphysical posit, but somehow implicated in agency and thought itself. And this is where phenomenology stands to pick up the slack left by metaphysical realism.
Phenomenologists frequently describe this experience of world-hood in terms of experience of things occurring in contexts or “horizons”. When I see a hammer, I see it from a certain viewpoint, or hear it falling off a workbench as the cat passes by. I may think of it as a force amplifier or a birthday present; but each thought or experience implies the possibility of further perspectives down the line. The hammer cannot be reduced to any of these: it is not determinate but, rather, determinable. Its objectivity consists in being always in excess of its appearances (Mooney 1999). A horizon is that aspect of an experience that implies non-actual possibilities for experience.
Roughly, we share worlds if my horizons overlap with yours. For example, I might not immediately grasp the significance of basil in your cookery, but could, given the opportunity to share food with you. My relationship to basil as it figures in your life is not a formal semantic relationship. My conception of basil may involve different stereotypes – desiccated leaves on supermarket shelves, say – whereas you are punctilious about picking it fresh from the herb garden. Still, your relation to basil is a determinable for me, even if it bears no relation to the way in which I currently prepare salads and sauces.
So, to recapitulate: local correlationism for agency (Condition 3) or Davidson’s observability assumption is best understood as falling out of pragmatism with regard to psychological and semantic concepts. And pragmatism (I have suggested) needs a correlational account of a world – a world likewise determinable in practice, rather than the transcendent world of metaphysical realism.
Admittedly, this seems to commit the pragmatist to a transcendental account of the world that might sit uneasily with the modestly naturalistic accounts of practices and norms in which such accounts are generally expressed. It also commits the pragmatist to anti-realism, since the world is not a determinate existing thing; nor could there be one transcending determinability (or verification).
But the relationship between pragmatism, realism and naturalism is debatable for other reasons, so it is not clear that naturalistic scruple alone should debar the inference from a pragmatist account of agency and subjectivity to a phenomenological theory of the world.
In Donald Davidson and the Mirror of Meaning, Jeff Malpas argues that interpretation must have this horizonal structure. All interpretation occurs in a context fixed by certain interests and projects. Any particular project can be frustrated or break down (Malpas 1992: 128). Any project must, moreover, open onto the constitution of a new project, just as each view of the hammer implies the possibility of other views. Thus pragmatism assumes that each project of understanding is “nested” within further possible projects.
This interleaving of interpretative projects is correlatively an interleaving of things. Beliefs cannot be identified independently of the determinables that believers engage with. By the same token, the identification of salient collections of objects and events occurs against the background of the interpreter’s experience and interests. The nested structure of projects described by Malpas thus constitutes a plausible candidate for a non-reified “world” – a world not of things, but of potential “correlations” between intentional agents and determinable objects.
This interleaving is only intelligible if we assume each project to have a hermeneutic structure referred to as “fore-having” within the hermeneutic tradition. Each interpretation must potentially fan out onto future revisionary interpretations (Caputo 1984: 158). Without appeal to this tacit or virtual structure, there is little content that can be given to the idea of a single intersubjective world that Davidson and the other pragmatists must appeal to.
It is precisely at this point, according to Malpas, that static concepts of a determinate world seem wholly inadequate and the temporalized models of intentionality and understanding developed in the phenomenological/hermeneutic tradition assume importance.
However, I think it is very doubtful that any phenomenological method can even tell us what its putative subject matter (“phenomenology”) is. This, as I will argue, is disastrous for the idea of a temporally structured horizon that otherwise seemed so serviceable for the pragmatist.
Caputo, J. D. 1984. “Husserl, Heidegger and the Question of a ‘Hermeneutic’ Phenomenology”. Husserl Studies 1(1): 157–78.
Devitt, Michael. 1991. “Aberrations of the Realism Debate”. Philosophical Studies 61(1): 43–63.
Malpas, J. E. 1992. Donald Davidson and the Mirror of Meaning: Holism, Truth, Interpretation. Cambridge: Cambridge University Press.
Mooney, T. 1999. “Derrida’s Empirical Realism”. Philosophy & Social Criticism
Roden, David. 2013. “Nature’s Dark Domain: An Argument for a Naturalised Phenomenology”. Royal Institute of Philosophy Supplements 72: 169–88.
Roden, David. 2014. Posthuman Life: Philosophy at the Edge of the Human. London: Routledge.
Roden, David. Forthcoming. “On Reason and Spectral Machines: An Anti-Normativist Response to Bounded Posthumanism”. To appear in Philosophy After Nature, Rosi Braidotti & Rick Dolphijn (eds).
Just an attempt in progress to clarify an argument regarding the plausible dependence of pragmatist theories of intentionality on phenomenological worlds.
Interpreters can differ in many ways that are irrelevant to their interpreterhood – differences in language, embodiment, gender, etc. However, if there are essential features common to all interpreters these might show up in the way subjects relate to the world and to other subjects. Phenomenology has traditionally been viewed as a powerful method for describing such relations. So if there are minimal conditions for being an interpreter, maybe phenomenology can help us spell them out.
Let’s put more bones on this. Condition 3 in the paradox of the Radical Alien corresponds to what Donald Davidson calls “the observability assumption”. This states that “an observer can under favorable circumstances tell what beliefs, desires, and intentions an agent has” (Davidson 2001b: 99). In other words, if x is an agent, x must be interpretable under ideal conditions.
This view finds its home in a family of broadly pragmatist, post-Cartesian positions according to which the role of concepts such as meaning, belief, desire or intention is to render agency intelligible in the light of reasons. Having intentionality is, at some level, just the ability to conform to standards of rational agency. If so, there is no secret to being an agent – beliefs and thoughts are not hidden states of the soul. At a bare minimum, an agent must be an intentional system: one that, as Dennett puts it, is “voluminously predictable” when assessed as a rational subject of belief or desire. A being that could not show up as exhibiting this skill would have failed to exhibit agential abilities.
This buys us local correlationism: an entity whose overt behaviour would be unintelligible in the light of normative assessments wouldn’t qualify as an agent! However, for Robert Brandom, Dennett’s intentional stance approach gives us a very sparse and incomplete picture of what it is to be an agent, because it applies just as readily to systems like maze-running robots, fly-catching frogs, thermostats, or written texts, whose intentionality seems observer-relative rather than intrinsic to the observed system. Most obviously, it fails to account for the capacity that allows intentional systems to show up as such: namely, the capacity to interpret. For Brandom, as for Davidson, intentionality and real agency require understanding as well as the ability to be understood; and this requires the capacity to interpret verbal behaviour and actions in the light of reasons.
For both philosophers, one of the conditions for such understanding is that both interpreter and interpretee have a structured language. Davidson presents a particularly terse argument for this connection:
Belief is an attitude of “holding” true some proposition: for example, that there is a cat behind that wall. Thus a true believer must have a grip on the concepts of truth and error. It follows that only those with a concept of belief can have beliefs. We cannot have a concept of belief without exercising it. Thus we cannot believe anything without the capacity to attribute to others true or false beliefs about common topics (Davidson 1984: 170; 2001b: 104).
This, in turn, requires a language. For beliefs and thoughts can only be interpreted by those who can compare their take on a topic with those held by the interpretee. Language affords this theatre of perspectives; expressing facts about things and semantic facts about how things are referred to or represented. It makes explicit that “one can want to be the discoverer of a creature with a heart without wanting to be the discoverer of a creature with a kidney” (Davidson 1984: 163).
Interpretation requires “a coherent pattern in the behaviour of an agent” – between what agents do, believe or express and the conditions under which action and expression occur (Davidson 1984: 159). Were agents systematically duped or confused about the world, this pattern would be lacking; their behaviour would reveal nothing about what they wanted to say, what they believed or desired. Not only is this rapport a condition of interpretation, so is the presupposition that it obtains. To have a concept of belief that I can apply in the second person or the first, I must understand or see the other as engaging with things that I am or could be cognisant of. The (in)famous principle of Charity just is the assumption of shared cognisance. This is not an ethical embrace of cultural otherness, then, but another way of expressing the pragmatist idea that mentality is the ability to engage with the world in a rationally evaluable way.
The assumption of charity is only possible, then, if the interpretee is assumed to live among, and think about, commonly identifiable things. Understanding that you might have true or false beliefs about things I have beliefs about requires that I locate us in a shared field of actual or possible topics. As Davidson puts it:
Communication depends on each communicator having, and correctly thinking that the other has, the concept of a shared world, an intersubjective world. But the concept of an intersubjective world is the concept of an objective world, a world about which each communicator can have beliefs. (Davidson 2001b: 105)
For Davidson, and for pragmatists more generally, the ability to interpret and be interpreted in turn is a condition of intentionality and thus agency. But this requires that each agent understand the other to belong to a shared world. Moreover, it requires that there be such a world – in some sense: absent this condition, there would be nothing to interpret.
But what is this idea of a shared world an idea of? Under what conditions can two creatures be said to belong to one?
Davidson, D. 1984. Inquiries into Truth and Interpretation. Oxford: Clarendon Press.
____1986. “A Nice Derangement of Epitaphs”. In Truth and Interpretation, E. LePore (ed.), 433–46. Oxford: Blackwell.
____2001a. Essays on Actions and Events, Vol. 1. Oxford: Oxford University Press.
____2001b. Subjective, Intersubjective, Objective, Vol. 3. Oxford: Oxford University Press.
In Posthuman Life I define the posthuman in terms of the disconnection thesis (DT). One of the advantages of DT is that it allows us to understand human–posthuman differences without being committed to a “human essence” that posthumans will lack. Rather, we understand the human (or WH, the “wide human”) as an assemblage of biological and non-biological individuals, whose history stretches from the world of Pleistocene hunter-gatherers to the modern, interconnected world, and perhaps beyond. Thus it avoids the objection that the hypothesis that there could be posthumans (Speculative Posthumanism, aka SP) is rendered meaningless by denying, or deconstructing, the claim that there is a human essence – a set of necessary conditions for being human.
However, DT is in tension with the thought of the radical alien discussed in the preceding post. The problem, roughly, is that claims about the radical alien seem to imply that the alien is not just difficult to understand – the kind of difficulty that could be overcome with time, sweat and ingenuity – but beyond human understanding in principle. But this implies that at least one necessary proposition is true of humans: namely, that for any radical alien, they would be incapable of understanding it.
Thus there can be radical aliens only if there is (after all) a human essence.
DT does not require that there is no human essence; it is merely consistent with its denial. But I have independent reasons for thinking that there are no necessary cognitive constraints inherent in human understanding. Suppose that there is some kind of human essence and that part of it includes the inability to understand certain radical aliens. It follows that the open sentence “… understands R”, where R refers to some radical alien, is necessarily false of all humans.
However, this only constitutes a real constraint on humans if each human is necessarily human – that is, if there is a necessary limit on the ways in which the cognitive powers of agents could be altered. Maybe there are such limitations, but they must be knowable either a posteriori or a priori. If a posteriori, we need evidence for them. It is not clear that there is such evidence around, or what form it might take. So there are reasons for scepticism here.
Suppose, then, that such constraints are of the a priori kind formulated and buttressed in transcendental philosophies: Husserlian phenomenology, say, or some versions of Kantian philosophy, such as the analytic Kantianism associated with thinkers like Sellars and Brandom.
What these positions have in common is the claim that there are invariant conditions for thought and intelligibility. Here what is at issue is the intelligibility of agents. In the case of phenomenology, the condition is that an agent is embodied in a world shared by humans whose actions and experiences can be understood as directed towards that world. In the case of analytic Kantianism, the condition is similar: the agent’s activity must be interpretable in terms of a set of inferential or practical commitments.
These commitments are social statuses whose content is expressed in the sentences of an interpreting idiom or “metalanguage”. This also presupposes a shared world since this content can only be articulated where enough of the statuses are elicited or prompted by things or states of the world which can be identified by prospective interpreters. In the absence of such referents interpretative idioms would be (as Davidson argues) untestable and lack the non-inferential component required for any plausible inferentialist account of content.
A radical alien would not belong to the set of beings whose agency can – in Davidson’s metaphor – be triangulated by reference to a common world. Its agency would be perpetually occult to humans. By the same token, it could not belong to the common world of the phenomenological account. It would be a closed book. But here we seem to be locked in a contradiction:
1) The radical alien would not belong to the class of beings whose behaviour can be interpreted as actions.
2) The radical alien would be an agent.
3) An entity whose behaviours could not be construed as actions, even in principle, would be a non-agent.
After all, where else does our concept of agency get its content than from its attribution to the things we could, in principle, treat as agents?
So 1), 2) and 3) are inconsistent. A paradox! However, we can defuse the paradox by denying 3). 3) implies a kind of local correlationism for agency: the only kinds of things that could count as agents are those amenable to human practices of interpretative understanding, whatever these may amount to. 3) thus denies that there could be evidence-transcendent facts about agency that such procedures might never uncover.
Have we good reason to drop 3), other than to avoid the paradox? Yes, I think so – and I have argued this at some length elsewhere. We need only deny that there is some framework corresponding to the interpretable as such.
And this, of course, is in line with anti-essentialism with regard to the human. If there are no de re modal facts concerning what is possibly (or not possibly) interpretable, there is no thing such that it is either possibly interpretable or not possibly interpretable for us or for creatures relevantly like us. Thus, whatever belongs to the class of agents, that class is not delineated by any practices of intersubjective interpretation. Another way of putting this is that the concept of agency cannot be totalised: there is no collection of all possible agents.
Thus our concept of the agent is – in a sense – empty or void. When we speak of agency in the abstract, we are not using a concept of which we have an existing, if implicit, mastery. It follows, however, that our concept of the radical alien is similarly void. We thought that it must transcend the field of the interpretable; but if, as I have suggested, there is no such field, there are no radical aliens understood in this interpretation-transcendent sense.
But then what of the intimations of the alien in Lovecraft, Wells and other writers? Does my use of the idea of the radical alien involve a kind of misprision? In my next post I will argue that it does not – but only if we re-interpret the otherness or difference of the alien in aesthetic terms rather than in terms of some metaphysics of agency.
Billions of years in the future, the Time Traveler stands before a black ocean, under a bloated sun. The shore is scaled with lichen and flecked with snow. The crab things and giant insects that menaced him on his visit millions of years in its past are gone. Apart from the lapping of dark waves, everything is utterly still.
He thinks he sees something shifting in the waves nearby but dismisses it as an illusion, assuming it to be a rock. Still, a churning weakness and fear deter him from leaving the saddle of the time machine. Perhaps this anxiety is merely prompted by the ultimate desolation of this world.
Studying the unknown constellations, he feels a chill wind. The old sun is being eclipsed by the moon, or some other massive body – for it is possible that the Earth has shifted into a new orbit around its star.
Twilight segues to black. The wind moans out of utter darkness and cold. A deep nausea hammers his belly. He is on the edge of nothing. Then the object passes, and an arc of blood opens the sky.
And by it he sees what moves in the water: “It was a round thing, the size of a football perhaps, or, it may be, bigger, and tentacles trailed down from it. It seemed black against the weltering blood-red water, and it was hopping fitfully about.”
He is terrified of passing out with the thing waiting for him in the shallows. He retreats back into the past. The familiar contours of his laboratory swim into being around him.
During the Traveler’s brief acquaintance with it, the thing appears devoid of purpose. Its “flopping” motion might be due to the action of the waves. It might lack a nervous system, let alone a mental life replete with beliefs and desires. But his acquaintance with it is brief, after all, and he knows nothing of it or of its world – if it can be said to have one.
It is tempting to suggest alternative scenarios in which the Traveler does not retreat from the thing in the water and remains to study it (and perhaps be studied in turn).
He might find that it is a traveler from some even deeper future, or the representative of an extra-terrestrial culture. Perhaps observation and autopsies would reveal it to be an offshoot of modern Cephalopoda, trawling the desultory shoreline for bite-sized crustaceans.
Again, a Lovecraft–Wells crossover might cast it as the baleful representative of ultimate cosmic evil. Perhaps it locks the Traveler out of his own body, storing his mind like a living fossil. Then it sits in the saddle and returns to the present, where, sooner or later, it begins to eat our history.
These narrative possibilities are forestalled, however. Within Wells’s fictional world the nature of the creature remains undetermined and thus indeterminable. Readers of The Time Machine can only imagine the Traveler’s presentiment on encountering it, and wonder why he finds the thought of being near it so terrible. The creature remains hidden, its meaning held in a perpetual tomb.
Given time and effort, radical interpretation might unveil the obscurities of merely unfamiliar languages or forms of life. But radical aliens would remain obdurately outside thought. In Western traditions, the idea is commonly expressed in an apophatic mysticism that treats the divine as an ineffable and unthinkable other. In apophasis, this reality is expressed by what Eugene Thacker calls a “misanthropic subtraction”, in which words are stripped of any positive signification so as to hint at a transcendence beyond words (Thacker 2015: 140).
The arrest of narrative has a similar effect to the language of mysticism, since, in fiction, the undescribed must remain unknown outside the limits of our encounter with it. Most evocations of the radical alien exhibit a form of arrest: from the work of H. P. Lovecraft and William Hope Hodgson, to “New Weird” authors like Thomas Ligotti and Jeff VanderMeer, to the far-future science fiction of Hannu Rajaniemi and Charles Stross.
As Graham Harman observes, Lovecraft uses a range of literary devices to subtract from the legibility of his cosmic deities, the Great Old Ones. This can occur via radical metaphor: in “The Dreams in the Witch House”, for example, Azathoth is said to lie “at the centre of ultimate Chaos where the thin flutes pip mindlessly”. The content of this description undermines its own metaphorical aptness, since ultimate chaos would be the decentring of all centres. The “thin flutes” should then be understood as “dark allusions to real properties of the throne of Chaos, rather than literal descriptions of what one would experience there in person” (Harman 2012: 36–7).
The adjective “mindless” does not imply here that this reality is simply non-mental, like the spontaneous production of particle/anti-particle pairs. Rather, it implies that conceptions like mindedness or agency are not being applied to this reality with their usual implications. Recall the ungainly flopping of Wells’s creature. Is this a sign of diminished sentience, a mute heteronomy before the waves; or of something that is no less a power in the world than us, but fundamentally unlike us?
When the sailor Johansen describes an encounter with Lovecraft’s amorphous tentacled god near the end of “The Call of Cthulhu” he must vitiate his own description:
“Of the six men who never reached the ship, he thinks two perished of pure fright in that accursed instant. The Thing cannot be described–there is no language for such abysms of shrieking and immemorial lunacy, such eldritch contradictions of all matter, force, and cosmic order. A mountain walked or stumbled.”
Likewise, the dread and physical abjectness related by the Traveller are not attributable to anything he has described; their presence in his account hollows it out without giving us the missing outline. They are prompted by something unmentioned, something perhaps unutterable, which can only be conveyed indirectly through its pernicious effect on the observer.
Wells and Lovecraft, then, both employ discrepant figures or elisions to “refer” to the unknowable and unsayable. Derrida has argued that philosophy is also in the grip of such undeterminable or undecidable tropes, where, for example, a term like “the sun” is used by Plato in Republic VI-VII to refer to the origin of intelligibility itself. Within the terms of Plato’s text there is no criterion of metaphorical aptness that tells us whether this is a “successful” metaphor for the ultimate Good, other than the account in which it already figures. Such radical metaphors constitute an ellipsis of meaning – a solar “eclipse” whose divorce from settled semantic domains frees metaphors up to play elsewhere as metaphysical concepts (Derrida 1974: 53-4).
Philosophical concepts are articulated in ways that distinguish them from the literary use of catachresis in Lovecraft or, in a very different context, in J. G. Ballard’s Crash and his novella “Myths of the Near Future”. There is a good deal to be said about Plato’s form of the Good, whereas Lovecraft provides no science or metaphysics to limn the ultimate reality of Azathoth, and Ballard’s ontology of the automobile collision is entirely exhausted by its place within Crash’s circuit of auto-destructive desire (Roden 2002). Still, this does not mean that allusion to unknowable entities in Wells, Lovecraft and others is without philosophical significance.
Firstly, both reject something that Platonic philosophy shares with apophatic theology – the jargon of transcendence. Lovecraft’s apophatic method discloses a dark, unknowable cosmos that is, however, devoid of transcendence. The Azathothic other is not beyond or “higher” than matter but intimately involved and active in a unitary, if ultimately chaotic and meaningless, universe.
Wells’ being on the shoreline is alive, even if its status as an agent is left entirely open. Both, then, imply something about what it is to live in a reality that is outside thought, autonomous with respect to it, even if not transcendent or spiritual.
This is connected, secondly, to the relationship between time and sensibility – to the aesthetics of an encounter that pre-empts any articulation of its nature (O’Sullivan 2010: 197). Such an encounter need harbour no meaning, no “fore-having” waiting to be glossed by the phenomenologist, for example. The phenomenology of the encounter can be dark, as I have argued elsewhere: it can be had without being further accessible through description or philosophical hermeneutics.
The radical alien can be encountered, then, but the encounter breaks the orderly procession of historical time and knowledge production. It leaves its mark in irreducible affects – terror, madness and physical desolation.
Derrida, J. and Moore, F.C.T., 1974. White mythology: Metaphor in the text of philosophy. New Literary History, 6(1), pp.5-74.
Harman, G., 2012. Weird realism: Lovecraft and philosophy. John Hunt Publishing.
Roden, D., 2002. Cyborgian subjects and the auto-destruction of metaphor. Crash Cultures: Modernity, Mediation and the Material, pp. 91-102.
O’Sullivan, S., 2010. From aesthetics to the abstract machine: Deleuze, Guattari and contemporary art practice. Deleuze and contemporary art, pp.189-207.
Thacker, E., 2015. Tentacles Longer Than Night: Horror of Philosophy. John Hunt Publishing.
A provisional abstract for my presentation at the Questioning Aesthetics Symposium in Dublin, 12-13 May.
Speculative Posthumanism (SP) claims that there could be posthumans: that is, powerful nonhuman agents arising through some technological process. In Posthuman Life, I buttress SP with a series of philosophical negations whose effect is to leave us in the dark about these historical successors (Roden 2014). In consequence, SP plunges us into moral and epistemic darkness. We lack rules specifying the nature of the posthuman or how to recognise it. We do not know what we are becoming, and we lack any assurance that our moral conceptions can travel into the future(s) we are complicit in producing.
I argue that the void delineated by speculative posthumanism implies that aesthetics is the first philosophy of the value domain, for it forces us to judge itineraries in posthuman possibility space without criteria. Art practices that engage with technological change thus supply a political model for pursuing and organizing trajectories into the future: one distancing us from any current conception of the good or any normative appeal to universality. This estrangement or abstraction, I will claim, does not express a postmodern ethics of transgression or “transvaluation” but falls out of the ontological structure of planetary technical networks.
Roden, David. (2012), “The Disconnection Thesis”. In A. Eden, J. Søraker, J. Moor & E. Steinhart (eds), The Singularity Hypothesis: A Scientific and Philosophical Assessment, London: Springer.
Roden, David (2013), “Nature’s Dark Domain: An Argument for a Naturalised Phenomenology”. Royal Institute of Philosophy Supplements 72: 169–88.
Roden, David (2014), Posthuman Life: Philosophy at the Edge of the Human. London: Routledge.
Roden, David (forthcoming), “On Reason and Spectral Machines: An Anti-Normativist Response to Bounded Posthumanism”. To appear in Philosophy After Nature, edited by Rosi Braidotti and Rick Dolphijn.
And how might the return of these possibilities offer a power of resistance? The resistance of biology to biopolitics? It would take the development of a new materialism to answer these questions, a new materialism asserting the coincidence of the symbolic and the biological. There is but one life, one life only.
Biological potentials reveal unprecedented modes of transformation: reprogramming genomes without modifying the genetic program; replacing all or part of the body without a transplant or prosthesis; a conception of the self as a source of reproduction. These operations achieve a veritable deconstruction of program, family, and identity that threatens to fracture the presumed unity of the political subject, to reveal the impregnable nature of its “biological life” due to its plurality. The articulation of political discourse on bodies is always partial, for it cannot absorb everything that the structure of the living being is able to burst open by showing the possibilities of a reversal in the order of generations, a complexification in the notion of heritage, a calling into question of filiation, a new relation to death and the irreversibility of time, through which emerges a new experience of finitude.
I’m ending an all too brief sojourn in Western Crete, just as Greece seems set to become Europe’s new experiment in post-democratic capitalism – its very own Interzone. Many, if not most, economists claim that the conditions cannot be met and that attempting to do so will shred Greece’s economic, social, educational and cultural life as much as the initial round of austerity.
Nonetheless, a bubble of ease is maintained here for those with euros. We who bask in the light and heat of the Aegean summer can condemn the deprivations heaped upon the Greek state and its citizens without having to experience them.
However factitious, this moment has allowed me to pause and think about some generous philosophical discussion of Posthuman Life on a number of excellent websites. These have forced me to think harder about the basic assumptions of the book. So here begins a series of reflective responses to my commentators under the rubric of “Dark Posthumanism” – though, as shall become clear, my use of the d-word is seriously tendentious.
I should begin by citing Debbie Goldgaber’s excellent post on Speculative Posthumanism and dark phenomenology. This catalyzed an exchange between deflationary naturalists like Scott Bakker and those like Jon Cogburn or Goldgaber, who favour a deconstructive or “weird realist” construal of dark phenomena. This debate resurfaced during a lively discussion at the New Centre for Research and Practice’s Posthuman Life 1 seminar, in which Debbie also participated. Its trenchancy was a surprise, although a welcome and productive one, which I’ll try to address in this post.
Meanwhile, the Philosophical Percolations Summer Reading group on PHL rolls on to Chapters 2 and 3 and the Ultima Thule of Unbounded Posthumanism! I should also bow to John Danaher’s fine clarificatory effort over at Philosophical Disquisitions. He has not yet addressed the role of dark phenomenology, but it will be interesting to see what he makes of it.
Scott’s interview with me over at Figure/Bound communications recapitulates similar tensions while holding me to account for the ethical commitments of the book. I think there’s a connection between the epistemological issues arising from the dark phenomenology hypothesis and the ethics and politics of becoming posthuman. These are taken up in B. P. Morton’s terrific piece on trans/posthumanism and transgender (also at philpercs), to which I return in the sequel to this post.
So what’s the deal with Dark Phenomena?
On a first (and extremely shaky) approximation, there is a tension between a thin epistemological interpretation of dark phenomena – experiences that furnish no tacit yardstick for their description – and a weird reading that I hesitate to term “ontological”, since its presuppositions seem more difficult to articulate than those of the naturalist side.
On the epistemological reading, the dark side is a placeholder for structures of experience that phenomenology cannot elucidate without the help of science – in particular, psychology, neuroscience or cognitive science. Dark phenomena reveal the point at which the putative domain of phenomenology eludes the scrutiny of philosophical method. It does not imply any obscurity in principle, since what may elude phenomenology may be explicated in other terms.
On the weird (horror?) reading, the dark side must be understood via its disintegration or truncation of the subject: experiences of horror, alienation, humour or compulsion such as the spectral thing that, for Levinas, depersonalises the consciousness of the insomniac. As Cogburn points out, these incursions and eruptions in experience can be related to the late Idealist view that our experience of embodiment provides privileged insight into a pre-subjective Nature (Schelling) or a noumenal body that eludes representation. I think Eugene Thacker’s discussion of Schopenhauer in his book Starry Speculative Corpse captures the latter idea particularly well:
The Will is, in Schopenhauer’s hands, that which is common to subject and object, but not reducible to either. This will is never present in itself, either as subjective experience or as objective knowledge; it necessarily remains a negative manifestation. Indeed, Schopenhauer will press this further, suggesting that “the whole body is nothing but objectified will, i.e. will that has become representation” (122-3)
So darkness on the naturalist reading is a local problem for phenomenological method, whereas on the weird reading it is an obscure disclosure (“negative manifestation”) of something (some thing) that resists any form of representation or theory. It must also be contentless if it is to do the work of undercutting the claims of transcendental conceptions of the subject, whether phenomenological, existential or pragmatist.
So far this seems as if it might be almost consonant with Bakker’s take on dark phenomenology. As he writes in his commentary on Goldgaber, phenomenology qua method:
assumes we have a reflectively accessible experiential plenum to begin with, that we actually possess a ‘phenomenology’ worth the name. The problem, in other words, is that we have no way of knowing just how impoverished our ‘phenomenology’ is in the first place.
If phenomenology is dark, then phenomenological method is at best incomplete and at worst benighted. For example, experienced temporality is as transcendent and inaccessible to us as the structure of matter. According to this account, phenomenology can never be more than a descriptive science of nature, and it should not aspire to a priori status, since there is no good reason to think that its descriptions are authoritative. There are good empirical reasons for thinking that we take our judgements about the contents of our minds or experiences to be based on an unmediated givenness only because we are not mindful of the heavy lifting required to produce them. If phenomenology is dark we are, as Bakker implies, in the dark about the dark.
The weird reading might now seem a little shady. Even the metaphor of darkness is misleading if it implies a phenomenology of the “gaps in presence”. This would be feasible only if we already knew the structure of the plenum and (or so the argument goes) there is no good reason to think that we do.
This seems to warrant a cautious analogy between the thesis that there is a dark side to phenomenology and Derridean deconstruction, which, though drawing on the language of phenomenology, cuts it free of any secure domain by generalizing subjective temporality well beyond anything conceivable as a subject, to the iterable mark, to generalized writing etc. (PHL: 94).
Goldgaber imputes to me the claim that this structure, at least, is generalizable beyond the human:
were it possible to show that there are dark elements in our own phenomenology, experienceable but not amenable to description or interpretation, we would have grounds, Roden thinks, for understanding human subjectivity in terms of both its unity and radical difference or rupture from world–as dependent on structures that are shared by nonhumans.
I’m not sure that I go this far. I suspect a purer Derridean like Martin Hägglund might. But, like Bakker, I don’t see any reason to think that such claims are on securer ground. Their virtue is salutary rather than informative: they expose the indeterminacy of claims about the structure of worldly agency and time.
On the other hand, once we take dark phenomenology (or Bakker’s blind brain theory) as a serious epistemological proposal, we seem confronted with a darkness without negation, not one contrary to the light side (which, by hypothesis, is already striated with it). And here one is almost tempted to say that harder-than-hard naturalism bites the tail of mysticism. In Starry Speculative Corpse, Thacker distinguishes a metaphysical correlation (between thought and object) presupposed by philosophy from a mystical correlation that can only verify itself by breaking against an impersonal “divine” darkness (84-5) that can never be recuperated by thought. A similar failure of correlation seems to obtain here. Even the tools (concepts like “plenum”) with which we are attempting to think the absence of a proper topic for phenomenology have to fail us. A thought that reiterates its failure in this way obeys the logic of the mystical as Thacker describes it.
So while we may not have any knowledge of what we could share with unboundedly weird posthumans, or nonhumans of other stripes, we are led into a defile that is boundless on either reading. Perhaps the deflationary reading is as weird as it gets. Perhaps, as Bakker puts it in Neuropath, we are all already “vast and terrible with complexity”. As the tagline to the novel states: you do not know what you are. You do not know what it is that does not know this. We do not know where the darkness ends, how far it extends. And perhaps it is this pervasive boundlessness that can provide a tentative opening beyond the human, freeing us, as Morton might say, to explore the near inhuman, the trans of alterable bodies and desires.
Or maybe this is too quick! It’s easy to make imaginary progress in a frictionless milieu. I’ll return to Morton in Dark Posthumanism II.
There’s a lively debate at Speculative Heresy around Scott Bakker’s recent lecture, “The End of the World As We Know It: Neuroscience and the Semantic Apocalypse”, given at the University of Western Ontario’s Centre for the Study of Theory and Criticism. The text includes responses from Nick Srnicek and Ali McMillan.