In the philosophy of technology, substantivism is a critical position opposed to the common-sense philosophy of technology known as “instrumentalism”. Instrumentalists argue that tools have no agency of their own – only tool users do. According to instrumentalism, technology is a mass of instruments whose existence has no special normative implications. Substantivists like Martin Heidegger and Jacques Ellul argue that technology is not a collection of neutral instruments but a way of existing and understanding entities which determines how things and other people are experienced by us. If Heidegger is right, we may control individual devices, but our technological mode of being exerts a decisive grip on us: “man does not have control over unconcealment itself, in which at any given time the real shows itself or withdraws” (Heidegger 1978: 299).
For Ellul, likewise, technology is not a collection of devices or methods which serve human ends, but a nonhuman system that adapts humans to its ends. Ellul does not deny human technical agency but claims that the norms according to which agency is assessed are fixed by the system rather than by human agents. Modern technique, for Ellul, is thus “autonomous” because it determines its principles of action from within itself (Winner 1977: 16). The content of this prescription can be expressed as the injunction to maximise efficiency: a principle that overrides whatever conceptions of the good are adopted by human users of technical means.
In Chapter 7 of Posthuman Life, I argue that a condition of technical autonomy – self-augmentation – is in fact incompatible with it. “Self-augmentation” refers to the propensity of modern technique to catalyse the development of further techniques. Thus while technical autonomy is a normative concept, self-augmentation is a dynamical one.
I claim that technical self-augmentation presupposes the independence of techniques from culture, use and place (technical abstraction). However, technical abstraction is incompatible with the technical autonomy implied by traditional substantivism, because where techniques are relatively abstract they cannot be functionally individuated. Self-augmentation can only operate where techniques do not determine how they are used. Thus substantivists like Ellul and Heidegger are wrong to treat technology as a system that subjects humans to its strictures. Self-augmenting Technical Systems (SATS) are not in control because they are not subjects or stand-ins for subjects. However, I argue that there are grounds for claiming that such a system may be beyond our capacity to control.
This hypothesis is, admittedly, quite speculative but there are four prima facie grounds for entertaining it:
1. In a planetary SATS, local sites can exert a disproportionate influence on the organisation of the whole but may not “show up” for those lacking “local knowledge”. Thus even encyclopaedic knowledge of current “technical trends” will not suffice to identify all future causes of technical change.
2. The categorical porousness of technique adds to this difficulty. The line between technical and non-technical is systematically fuzzy (as indicated by the way modern computer languages derive from pure mathematics and logic). If technical abstraction amplifies the potential for “crossings” between technical and extra-technical domains, it must further ramp up uncertainty regarding the sources of future technical change.
3. Given my thesis of Speculative Posthumanism, technical change could engender posthuman life forms that are functionally autonomous and thus withdraw from any form of human control.
4. Any computationally tractable simulation of a SATS would be part of the system it is designed to model. It would consequently be a disseminable, highly abstract part. So multiple variations of the same simulation could be replicated across the SATS, producing a system qualitatively different from the one it was originally designed to simulate. In the work of Elena Esposito a related idea is examined via the way users of financial instruments employ uncertainty as a means of influencing the decisions of others through their market behaviour. Esposito argues that the theories used by economists to predict market behaviour are performative: they influence economic behaviour, though their capacity to predict it is limited by the impossibility of self-modelling (Esposito 2013).
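The self-modelling problem behind point 4 can be illustrated with a toy feedback loop (my own construction, not drawn from Esposito's paper): a published forecast that market participants react to alters the very price it forecasts. Whether a self-consistent prediction is even reachable depends on how strongly the system reacts to being modelled – the `reactivity` parameter and price values below are entirely hypothetical.

```python
def market_price(forecast, reactivity):
    """Hypothetical market: traders react to the published forecast,
    moving the price away from a baseline of 100 in proportion to
    how far the forecast itself deviates from that baseline."""
    return 100 + reactivity * (forecast - 100)

def iterate_forecast(reactivity, steps=50):
    """The predictor repeatedly publishes its forecast and then adopts
    the price that publication induces as its next forecast."""
    f = 90.0  # initial (wrong) forecast
    for _ in range(steps):
        f = market_price(f, reactivity)  # each publication changes the outcome
    return f
```

With weak feedback (`reactivity < 1`) the loop settles on a self-fulfilling forecast; with strong feedback (`reactivity > 1`) every act of prediction pushes the price further from the prediction, and no self-consistent model of the system-including-the-model exists.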
If enough of 1–4 hold, then technology is not in control of anything but is largely out of our control. Yet there remains something right about the substantivist picture, for technology exerts a powerful influence on individuals, society, and culture, if not an “autonomous” influence. However, since technology is self-augmenting and thus abstract, it is counter-final – it has no ends and tends to render human ends contingent by altering the material conditions on which our normative practices depend.
Ellul, J. 1964. The Technological Society, J. Wilkinson (trans.). New York: Vintage.
Esposito, E. 2013. “The Structures of Uncertainty: Performativity and Unpredictability in Economic Operations”. Economy and Society 42(1): 102–129.
Heidegger, M. 1978. “The Question Concerning Technology”. In Basic Writings, D. Farrell Krell (ed.), 283–317. London: Routledge & Kegan Paul.
Roden, D. 2014. Posthuman Life: Philosophy at the Edge of the Human. London: Routledge.
Winner, L. 1977. Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. Cambridge, MA: MIT Press.
Here’s Scott Bakker with a most eloquent statement of his pessimism about our far-future technological, posthuman prospects:
Brain science. The reason why I fear that ‘cognitive augmentation’ will be catastrophic turns on the way I see psychology and neuroscience slowly confirming what I think should be humanity’s greatest scientific fear: the possibility that meaning and morality are simply figments of our neural parochialism. If this is the case, it means the very frame of reference that Marone uses to value ‘biohacking’ will in fact be one of the first casualties of biohacking.
It’s a response to Rachel Marone’s post on biohacking over at Memetics.
If, as Scott urges, the posthuman is the postsemantic (and we agree that having a semantics is a good thing), then becoming posthuman is bad in at least one respect.
I’m not sure that I agree with Scott’s assumption that meaning is an artifact of our ‘neural parochialism’. Certain aspects of our phenomenology might well be. For example, I take it that one of the implications of Metzinger’s account is that Dasein is a kind of online hallucination. In any case, Metzinger’s is a representationalist philosophy and representations have representational content, so it is hard to formulate his position without presupposing that meaning is real (if meaning is an illusion there must be at least one meaningful thing – namely the illusion of meaning). If you want to say that all meaning is an illusion you need to explain how the misrepresentation of meaning is possible without representation.
But this aside, I guess I want to urge a far more capacious sense of the posthuman. This is partly for ethical reasons. To misquote Iain H. Grant: I just can’t believe that fourteen billion years of cosmic evolution occurred so we could have this chat. To quote my own plodding formulation: “[We] know that Darwinian natural selection has generated novel forms of life in the evolutionary past since humans are one such. Since there seems to be nothing special about the period of terrestrial history in which we live it seems hard to credit that comparable novelty resulting from some combination of biological or technological factors might not occur in the future.” We may not currently be in a position to evaluate our potential successors, but I can’t see why they should not possess analogues of our own conceptions of the good, even if they are currently inconceivable for us – even if they transcend our powers of comprehension in much the way that the ideas of number theory exceed the cognitive grasp of a non-uplifted rat.
So I’ve urged a deliberately schematic and anti-essentialist conception of the posthuman in two parts. The first part recursively specifies a relation of wide human descent. The important thing here is that wide human descendants need not be humans.
An entity is a wide human descendant if it is the result of a technically mediated process:
A) Caused by a part of WH (WH stands for the “Wide Human” – aka the human socio-technical network), where the ancestral part may be wholly biological, wholly technological or some combination of the two; or
B) Caused by a wide human descendant.
A is the “basis clause” here. It states what belongs to the initial generation of wide human descendants without using the concept of wide descent. B is the recursive part of the definition. Given any generation of wide human descendants it specifies a successor generation of wide human descendants.
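The two-clause structure can be sketched as an inductive membership test. This is purely my own illustration – Roden states the definition informally, and the entity names and the `caused_by` relation below are hypothetical placeholders for technically mediated production:

```python
def is_wide_human_descendant(entity, wh_parts, caused_by):
    """Return True if `entity` is a wide human descendant (WHD).

    wh_parts  -- the set of parts of WH, the Wide Human socio-technical network
    caused_by -- maps an entity to the set of entities whose technically
                 mediated activity produced it (assumed acyclic, since
                 causal ancestry cannot loop back on itself)
    """
    parents = caused_by.get(entity, set())
    # Basis clause (A): produced by some part of WH.
    if parents & wh_parts:
        return True
    # Recursive clause (B): produced by an entity that is itself a WHD.
    return any(is_wide_human_descendant(p, wh_parts, caused_by)
               for p in parents)
```

For instance, an artefact produced by something that was itself produced by a part of WH counts as a WHD via clause B, even if nothing human figures in its immediate causal history – which is exactly why a WHD need not be human.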
It is important that this definition does not imply that a wide human descendant need be human in either wide or narrow senses. Any part of the human socio-technical network ceases to be widely human if its wide descendants go “feral”: acquiring the capacity for independent functioning and replication outside the human network.
Becoming posthuman would thus be an unprecedented discontinuity in the hominization process. Human life has undergone revolutions in the past (like the shift from hunter-gatherer to sedentary modes of life) but no part of it has been technically altered so as to function outside of it.
A being is a posthuman WHD if it breaks out of the human network. If toothbrushes got smartened up and became sufficiently autonomous to reproduce without having to be teeth-cleaners, and to devise their own ends, they would cease to be wide humans and become posthumans. A posthuman is any WHD that goes feral – that becomes capable of life outside the planetary substance composed of narrow biological humans, their cultures and technologies.
This formulation leaves the value and worth of the posthuman open. Since we cannot evaluate the posthuman ex ante, we can only assess its value by exploring posthuman design space for ourselves. This is where Rachel’s biohacking manifesto comes into its own, I think, for it questions who gets to decide the shape of the posthuman – military corporate systems, venture capitalists, or you and me?