The Posthuman: Differences, Embodiments, Performativity
Call For Papers
September 11th – 14th 2013, University of Roma 3, Rome, Italy
The University of Roma 3, the University Erlangen-Nürnberg,
the University of the Aegean and Dublin City University
are pleased to announce:
The 5th Conference of the Beyond Humanism Conference Series
The specific focus of the Conference “The Posthuman: Differences, Embodiments, and Performativity” will be the posthuman, in its genealogies, as well as its theoretical, artistic and materialistic differences and possibilities. In order to guarantee a systematic treatment of the topic, we will particularly focus on the following themes:
1 What is the posthuman? Have humans always been posthuman? If so, in which sense? Is the posthuman a further evolutionary development of the human being? What are the implications of gender, sex and race, among other differential categories, for the embodied constitution of the posthuman? Do posthumans already exist? What is the difference between the posthuman, the transhuman, the antihuman and the cyborg?
2 Philosophical issues concerning the genealogies of the posthuman: Which traditions of thought are significant to the posthuman theoretical attempt to postulate a post-dualistic and post-essentialist standpoint? What are the differences between the genealogies of the posthuman and of the transhuman? What points do they hold in common? Is the posthuman a Western-centric notion? Could non-dualistic practices such as shamanism be counted as posthuman?
3 Bioarts, Body Art, Performance Art and the Posthuman: Which kind of art can be seen as leading towards the posthuman? Is the notion of the posthuman traceable in artistic traditions which precede the coining of the term “posthuman”? Can the posthuman be detected in cultures which have not been canonized by Western aesthetics?
4 Ethics, Bioethics, and the Moral Status of the Posthuman: Does the posthuman lead to a new, non-universalist, non-dualist understanding of ethics? Will posthumans have the moral status of a post-person, or will it be possible for them to have human dignity and personhood? Are human rights necessarily humanistic, or can they be re-enacted within a posthuman frame?
5 Emerging Technologies and the Posthuman: Which technologies represent the most significant challenge concerning the concept of the human/posthuman? Are restrictive national regulations concerning emerging technologies helpful in a globalized world? Do mind-uploading, plastic surgery, and cyborgian practices dissolve the border between human beings and machines? Human enhancement is already happening: should morphological freedom be regulated by social norms, or should it be left to individual choice?
6 Materialism and Posthuman Existence: The notion of matter as an active agent has been reinforced through Quantum Physics, on a scientific level, as well as by New Materialisms and Speculative Realism, on a philosophical level. Is the posthuman grounded in a materialist understanding of existence? What are the ontological, as well as the existential implications of the relationality of matter? Can it be related to a Posthuman Agency? What would a Posthuman Existentialism imply?
7 Posthuman Education: What does education mean in a posthumanist world? How are the roles of teachers and learners transformed in a posthuman social environment? What is the concept of a post- and transhumanist school? Which learning activities are central in a posthumanist educational system? What epistemological considerations bear on knowledge construction in the posthumanist era?
Papers will be selected and arranged according to related topics. Equal voice will be given, if possible, to presentations from the arts, humanities, sciences, and technological fields.
Major areas of interest include (in alphabetic order):
Animal Studies, Antihumanism, Heritage and the Arts, Postmodernism, and Conceptual Art, Bioarts and Performance Art, Bioethics, Cosmology, Critical Race Studies, Cultural Studies, Cyborg Studies, Deconstructionism, Disability Studies, Ecology, Informatics, Emerging Technologies and Ethics, Enhancement, Evolution, Existentialism, Gender Studies, Intersectionality, New Materialisms, Philosophy, Physics, Posthumanism, Quantum Physics, Science and Technology Studies, Singularity, Spirituality, Speculative Realism, Transhumanism
Other possible topics include, but are not limited to:
· Bioethics, bioconservatism, bioliberalism, enhancement
· Posthumanist anthropology, aesthetics, ecology, feminism, critical theory
· Representation of human performance in technology and the arts
· Enhancement and political discourse, regulation, and human rights
· Humanism, posthumanism, transhumanism and antihumanism in philosophy
· Poststructuralism, postmodernism, and posthumanism
· New Materialisms, speculative realism and quantum physics
· Existentialism, relational ontology, posthuman agency
· Transhuman and posthuman impact on ethics and/or value formation
· Phenomenology and postphenomenology
· Embodiments and identity
· Transhumanism and/or posthumanism in science fiction and utopian/dystopian literature
· Non-dualism in spiritual practices, mysticism and shamanism
· Globalization and the spread of biomedicine and transhumanism
· Economic implications of transhumanist projects
· Popular culture and posthumanist representations
· Theology, enhancement, and the place of the posthuman
· Technology, robotics, and ethics
· Cybernetics, artificial intelligence, and virtual reality
· Cyborgs and democracy
· Humanity, human nature, biotechnology
SUBMISSIONS & DEADLINES
We invite abstracts of up to 500 words, to be sent in MS Word and PDF format to: firstname.lastname@example.org
Files should be named and submitted in the following manner:
Submission: First Name Last Name.docx (or .doc) / .pdf
Example: “Submission: Mary Andy.docx”
Abstracts should be received by May 15th 2013.
Acceptance notifications will be sent out by June 15th.
All those accepted will receive information on the venue(s), local attractions, accommodations, restaurants, and planned receptions and events for participants.
*Presentations should be no longer than 20 minutes. Each presentation will be given 10 additional minutes for questions and discussions with the audience, for a total of 30 minutes.
FEES & REGISTRATION
A reduced registration fee of €50 (65 USD) will apply to all participants.
SERIES “BEYOND HUMANISM”
The Conference is part of the Series “Beyond Humanism”. The 1st Conference took place in April 2009 at the University of Belgrade (Humanism and Posthumanism), the 2nd Conference in September 2010 at the University of the Aegean (Audiovisual Posthumanism), the 3rd Conference in October 2011 at Dublin City University (Transforming Human Nature) and the 4th Conference in September 2012 at the IUC in Dubrovnik (Enhancement, Emerging Technologies and Social Challenges). This year, the conference “The Posthuman: Differences, Embodiments, and Performativity” will be held at the University of Roma 3, Department of Philosophy, Rome, Italy, from the 11th until the 14th of September 2013.
Charles Stross’ science fiction novel Accelerando provides a vivid and blackly funny portrayal of a transition from a merely transhuman to a genuinely posthuman world.
In Accelerando, the Singularity has arrived by the 22nd century (Vinge 1993). The self-improving AIs that now run the world are “wide human descendants” of human corporations and automated legal systems, which achieved both sentience and a form of legal personhood back in the 21st. As Stross’ narrator observes, the phrase “smart money” has taken on an entirely new meaning.
Eventually, these “corporate carnivores” – known as the “Vile Offspring” – institute a new economics (Economics 2.0) in which supply and demand relationships are computed too rapidly for those burdened by a “narrative chain” of personal consciousness to keep up. Under Economics 2.0 first person subjectivity is replaced “with a journal file of bid/request transactions” between autonomous software agents. E 2.0 is so remorselessly efficient that it comes to dominate not only the Earth but also the majority of the solar system. Whole planets are pulverized and converted into fast-thinking dust clouds of smart matter “blooming” around the sun (Stross 2006, 208-10).
This post-singularity scenario certainly seems bad for humans. Even their souped-up transhuman offspring prove equally incapable of functioning within E 2.0 and can only flee to the outer solar system and beyond as their worlds are “ethnically cleansed”.
At the same time, it is not clear that E 2.0 is really “good” for posthumans in a way that might conceivably outweigh its bad impact on humans.
If the posthuman entities – such as the Wide Descendants eating up the inner solar system of Stross’ novel – lack a linear, narrative consciousness, can their form of existence be worthy of ethical consideration?
Well, it might be argued that any being with conscious awareness – even one that does not involve rational subjectivity or personhood – is worthy of some moral consideration. Most accept that nonhuman animals are conscious of pains and pleasures, and it is plausible to argue that their interests in avoiding pains and having pleasures are identical to those of humans.
However, many humanists claim that the reasoning prowess of humans distinguishes them radically from nonhuman animals. Responsiveness to reasons is both a cognitive and a moral capacity. For Kant, this capacity to choose the reasons for our actions – to form a will, as he puts it – is the only thing that is good in an unqualified way and is the most important distinguishing characteristic of humanity as opposed to animality.
Even humanists for whom the human capacity for self-shaping is one good among many claim that “autonomy” confers a dignity on humans that should be protected by laws and cultivated.
Beings with the capacity for autonomy and the moral status that goes with it are commonly referred to as “persons”. Locke defined a person as “a thinking intelligent being that has reason and reflection and can consider itself as itself, the same thinking thing in different times and places”. If Locke is right about the psychological preconditions for personhood, then beings such as the Vile Offspring cannot count as persons because, as Stross puts it, their phenomenology lacks the “narrative centre” that a being needs in order to consider itself the same thing at different times. The practical rationality described in most post-Kantian conceptions of autonomy might not be accessible to a being with a non-subjective phenomenology. Such an entity would be incapable of experiencing itself as having a life that might go better or worse for it.
If humanists are right to say that persons have special moral worth, and we add to this the claim that there could be no nonpersons with moral worth greater than or equivalent to that of persons, then very weird and very non-human posthumans such as the Vile Offspring, who lack personal phenomenology, would not be as worthy of moral consideration as humans or transhumans.
Posthumans lacking personhood and the capacity for pleasure and pain would not be the sources of any kind of moral claim. Posthumans lacking personhood but possessing functional equivalents of pleasure or pain could be granted a status equivalent to that of non-human animals, which also lack the psychological prerequisites of personhood.
Posthuman singularity ethics would then be possible only in an etiolated form, and it would not be applicable where our wide human descendants departed radically from human phenomenological invariants.
Perhaps this is what accounts for the “vileness” of the Vile Offspring: they are not conscious subjects with plans for life and conceptions of the good but churning clouds of super-intelligent matter driven by inchoate drives – like H. P. Lovecraft’s blind, idiot god, Azathoth.
However, this analytic of the vile is premature. For it assumes that there is a moral hierarchy mapping onto a psychological or phenomenological hierarchy. But the fact that there are beings – persons – with the distinctive mental properties described by Locke and Kant does not entail that all beings lacking these properties must be morally inferior, or even vile. For it is conceivable that there could be intelligent beings whose experience lacks some prerequisites for personhood but who have phenomenological attributes that are different without being morally inferior.
We humans might find it hard to conceive what such impersonal phenomenologies could be like (to say of them that they are “impersonal” is not to commit ourselves regarding the kinds of experiences they furnish). However, this difficulty may simply reflect the fact that our phenomenology constrains our grasp of phenomenological possibility and necessity (Metzinger 2004: 213; Roden 2013b).
In particular, our phenomenology may be characterized by variable degrees of what Thomas Metzinger calls “autoepistemic closure”.
A phenomenology is autoepistemically closed if the processes that generate it are inaccessible within it. According to Metzinger, human personal experience is a dynamic and temporally situated model of the world, which represents the modeller as a distinct component. The phenomenal world model thus includes a phenomenal self-model or PSM. However, neither model represents the subpersonal cognitive processes that implement them. To borrow a phrase from Michael Tye: the phenomenal world-models and self-models are “transparent” – we seem to look through them into an immediately given world out there and a self-present mental life “in here” (Metzinger 2004: 131, 165).
Both immediacies, according to Metzinger, are epistemic illusions generated by the model’s insensitivity to its computational underpinnings. There is no self or subject doing the looking. The experienced self is, rather, the simulated content of the PSM rather than the subpersonal process that generates it.
If, as Metzinger claims, we are not self-intimating Cartesian selves or Kantian transcendental subjects but self-models, it is little wonder that our phenomenology affords limited insight into the space of possible minds. For example, our subjectivity seems to exist in a spatial-temporal pocket: a situated, embodied self and an ever evolving present. It is characterized by a bivalent distinction between self and other, non-mine and mine and a sense of temporal newness – or presentationality – “a virtual window of presence” that gives us a baseline with which to distinguish actuality and simulated possibility (Ibid. 42, 96). But this representational scheme may depend on the fact that our sensory and motor systems are “integrated within the body of a single organism”. Other kinds of life – e.g. “conscious interstellar gas clouds” or (more saliently for us) decentred post-human “swarm” intelligences like the Vile Offspring – might have experiences of a quite different nature (Metzinger 2004: 161).
A physically distributed entity with computing power to burn might support a “multi-threaded” and “multi-level” phenomenology that tracks the adventures of distributed processing sites while providing high-resolution models of its own cognitive processes. Such a distributed consciousness might have a very different functional structure to human consciousness.
A multi-threaded phenomenology might employ different strategies for modelling relationships between the modeller and its environment. We cannot easily imagine what such a phenomenology would be like – but inability to imagine it is not a demonstration of its impossibility.
So it is at least conceivable that a nonhuman phenomenology could be impersonal, but have representational characteristics no less sophisticated than “higher order” moral properties such as autonomy in humans. If personhood and autonomy are not unique “higher-order moral properties” and we are not yet in a position to compare them with posthuman modes of being, then we have no grounds to assume that they trump other candidates for ethical consideration. So we have very weak grounds for believing that persons (or autonomous human subjects) stand at the moral summit or centre of creation.
If that is right, then a person-relativist humanist ethic should be rejected along with a species relativist one. There may be non-personal modes of existence following a singularity (or posthuman-maker) no less valuable than those accessible to persons. This is compatible with the claim that persons have some intrinsic moral worth – though it does not entail this. If this value is genuinely intrinsic it is presumably unaffected by the existence of different modes of existence with their own intrinsic worth.
I think this possibility implies a form of posthuman justice. This is not the postmetaphysical, procedural justice described by Rawls and other liberal anti-perfectionists. Posthuman justice cannot be predicated on “fair terms of co-operation” between citizens of a state since any human-posthuman disconnection would, arguably, preclude a republic of humans and posthumans (Roden 2013a).
Now, we could try to express a formal principle of justice on the basis of the assumption that there could be valuable posthuman forms of existence. For example:
We should give equivalent consideration to such modes of being, whatever they may be.
I use “equivalent” in preference to “identical” since it would be presumptuous to describe a nonpersonal intelligence as having interests identical to those of a personal one.
However, this substitution does not achieve much. It does not tell us how these interests are equivalent or what duties might flow from the principle. As a guide to action or to life, the formal principle is not worth the pixels it is written in.
To invert Rawls’ famous disclaimer: the theory of posthuman justice is metaphysical, not political. It does not tell us what to do or how to coordinate our institutions. It just allows (for want of countervailing arguments) that potential posthuman lives could support modes of existence that are no less valuable than ours.
We could choose not to acknowledge these potential lives – were it possible to do so – but this refusal to acknowledge posthuman “otherness” would arguably be a kind of failure. It would be equivalent to the claim that something into which our insight is really very limited – “normal” human subjectivity and personhood – has a superior claim over the nonpersonal and potentially vile occupants of posthuman possibility space. This position might be warranted if our place in posthuman possibility space were not under consideration – e.g. if we were comparing the higher order moral properties of actual humans with actual nonhuman animals. But our attitude to our nonhuman Wide Descendants is at issue. Refusal to consider this possibility would be an intellectual failure as well as a kind of injustice.
Now, I think some would object that this capacious metaethical statement simply fails to do justice to the difficulty and danger attending an actual disconnection scenario. How, for example, could it guide us in an alien post-singularity environment of the kind described in Accelerando? There the remaining humans cannot communicate with or interpret the “radically other” posthumans eating up the mass of the inner solar system. (Near the end of Accelerando, the Vile Offspring start to resurrect every human who ever existed. Nobody finds out why.)
So we might concede the metaphysical principle that radically alien posthumans could merit some interpretative efforts on our part; but only if these were not futile.
No ethical principle should exhort us to act in vain, it seems. In cases where posthumans could be very radically alien, a Xenophobic Bias in favour of humans or fellow persons would appear to be the only ethical option that humans or persons could realistically pursue.
However, the idea of the “radically alien” that is in play here is philosophically problematic.
Firstly, we should distinguish between kinds of alienness. The autoepistemic closure of human phenomenology may make it hard to imagine or understand some alien minds; it does not imply that such understanding is impossible.
Autoepistemic closure is not cognitive closure. The fact that our self-model does not represent itself as representational or computational does not entail that we could not acquire a theoretical grasp of its representational or computational structure – this is precisely the point of Metzinger’s work and of others working in the science of consciousness.
This argument applies generally. The fact that a being might have a very different experience of the world to ours does not entail that we could not come to understand how that experience is constituted. Nor does it entail that such beings would be uninterpretable. Ethologists and pet owners regularly apply what Dennett refers to as the “intentional stance” to nonhuman animals – cats, dogs or monkeys, say – without worrying about the minutiae of their phenomenology.
To take up the intentional stance towards a system is to impute to it the beliefs and desires that it should have – given the kind of system it is – and then to see whether its behaviour can be predicted on this basis. Dennett describes how we might apply the intentional stance to raccoons:
One can often predict or explain what an animal will do by simply noticing what it notices and figuring out what it wants. The raccoon wants the food in the box-trap, but knows better than to walk into a potential trap where it can’t see its way out. That’s why you have to put two open doors on the trap–so that the animal will dare to enter the first, planning to leave by the second if there’s any trouble. You’ll have a hard time getting a raccoon to enter a trap that doesn’t have an apparent “emergency exit” that closes along with the entrance (Dennett 1995).
The raccoon’s responses to the one-door trap and its propensity to be seduced by the two-door trap justify the following interpretation of raccoon mental life: raccoons have beliefs (or “beliefs”) about the numbers of doors in traps, and they are averse to traps with only one door. Thus raccoons are intentional systems. This act of interpretation does not entail understanding what it is like for the raccoon to experience an aversion to one-door traps. Thus phenomenological similarity does not seem to be a necessary condition for interpreting nonhumans.
However, similarity of conceptual frameworks might be such a condition. If raccoons acted in a way that made it impossible to identify conceptual distinctions such as that between one- and two-door traps, then this particular intentional stance interpretation would not be possible.
So could posthumans be radically alien by virtue of having concepts or conceptual schemes that no human could have?
At this point an objector might become suspicious of my talk of “alien” minds and phenomenologies, for there are well-rehearsed philosophical arguments against radically incommensurate or alien conceptual schemes or languages, which give us cause to be suspicious of the ‘very idea’ of radically alien intelligences. The most famous of these is advanced by Donald Davidson in ‘On the Very Idea of a Conceptual Scheme’.
In ‘Idea’ Davidson claims that theories of conceptual incommensurability must construe conceptual schemes in one of two ways: in terms of a Kantian scheme/content dualism, or in terms of a relation of ‘fitting’ or ‘matching’ between language and world.
However, he argues that the Kantian trope presupposes that the thing organized – experience, say – is composite in a way that affords comparison with our conceptual scheme after all (Davidson 2001a, 192). Since incommensurability implies incomparability, the propositional trope – fitting the facts or the totality of experience, or whatever – is all that is left. For Davidson, this just means that an acceptable conceptual scheme is one that is mostly true (Ibid. 194). So an alien conceptual scheme or language would be largely true but uninterpretable (Ibid.).
For Davidson’s interpretation-based semantics, this is equivalent to a language recalcitrant to radical interpretation. For interpretation-based semantics, to have content or meaning just is to be interpretable as having that content or meaning, whether by “native speakers” or by uninformed outsiders (“radical interpreters”) who start out with no knowledge of the idiom at all. Thus an uninterpretable conceptual scheme would not be intelligible even to “native insiders”. It would not have any content or meaning at all and thus would not be true or false of anything at all.
To re-state this in terms of the current problematic: interpretation-based semantics states that if alien posthumans had minds, they would have interpretable representational states capable of reliably tracking truths.
So Davidson’s position implies that, regardless of variations in phenomenology, there cannot be any radically uninterpretable minds: whether alien, animal or posthuman. Thus any posthuman mind should be interpretable, in principle, by any human mind. This suggests that Vernor Vinge’s concern that the singularity might take us beyond good and evil, into a world in which human ethical frameworks simply lack applicability, is unfounded (Vinge 1993). Strictly speaking, there could be no such thing as a radical alien.
In presenting the Davidsonian argument against radical aliens, I’ve skirted some difficult technical issues about the nature of interpretative theories: e.g. whether a theory of truth for a language can capture what a native speaker grasps when they understand the language.
I have also ignored the distinction between interpreting public utterances and interpreting mental contents. Davidson assumes that they are part and parcel of the same activity, but he might well be wrong. Paul Churchland, for one, argues that human and animal concepts are fundamentally non-propositional in structure and thus imperfectly captured in public language. If so, Davidson is wrong to assume that any adequate conceptual scheme must thereby be true, since only sentences or semantic contents of sentences (propositions) can be true.
It is thus conceivable that weird posthumans such as the Vile Offspring would not think in sentences and thus would not deal in truths at all. Admittedly, the same could be true of raccoons and other non-human animals. Even if radical interpretation Davidson-style were not an appropriate interpretative gambit, something like Dennett’s intentional stance – which makes no assumptions about inner or outer representational format at all – might be an option in a semantic emergency.
However, even if we assume that the intentional stance or radical interpretation could work in such situations, it does not follow that it will work for arbitrary interpreters. In particular, there is no guarantee that it will work for some arbitrary human descendant of current humans. Thus Davidson’s and Dennett’s interpretationist approaches to content provide some grounds for believing that the Vile Offspring would not be a cognitive thing-in-itself, sealed off from minds of a different kind. But this just means that if we could learn to follow whatever passes for inference among the Vile Offspring and track the recondite facts that concern them, we would understand vilese.
Yet vilese could be as far beyond any wide or narrow human capabilities as human inference is beyond any raccoon. Moreover, phenomenology could be a limiting factor here: a Vile Super-Intelligence might be exquisitely sensitive to perspectival facts that are fully objective, yet which do not show up for beings with a different kind of Dasein.
This problem does not seem to arise for humans interpreting raccoons because, I take it, we are much smarter than they are. We can easily mimic the inferences that they draw and we can easily reconstruct what is important for them. In the case of world-chomping clouds of smart matter, we might not be so fortunate.
What are the implications of this for a Posthuman or post-Singularity Ethics?
Well, we have considered interpretationist grounds for believing that there could be no posthuman minds recalcitrant to interpretation in principle.
At best, we can infer that posthumans won’t be utterly transcendent – like the God of Negative Theology or Kant’s thing in itself. Thus a post-singularity existence might be interpretable in principle – if not by humans, then by human successors of humans. However, it is important to bear in mind that I am not using “human” to designate beings with some essential biological or cognitive nature here. According to the disconnection thesis, being human is a matter of belonging to one of two historical entities: the Wide Human – a socio-technical assemblage – or the Narrow biological species that keeps it going. Neither has been defined in terms of necessary or essential properties.
If this is right, any barrier to interpretation liable to hamper human attempts to evaluate or explore posthuman modes of existence will hold contingently. For a given set of posthuman minds – like the Vile Offspring – to be radically uninterpretable by humans, it would need to be a necessary truth about humans that Vile Offspring minds could not be understood by humans. But if belonging to the Wide Human is the only condition on humanity, no being could be debarred from Wide Humanity on the grounds that it could understand weird posthumans like the Vile Offspring. Thus any interpretative barrier would be a contingent matter rather than a consequence of some human cognitive essence.
This does not imply that an interpretative barrier will not occur, only that it is not inevitable. But what is not inevitable is, as Dennett quips, “evitable”. There is something someone (or something) can do about it.
Davidson, Donald (2001a). “On the Very Idea of a Conceptual Scheme”, in Inquiries into Truth and Interpretation, 2nd ed. Oxford: Clarendon Press.
Dennett, D. C. (1995). “Do Animals Have Beliefs?” Comparative approaches to cognitive science, 111.
Metzinger, Thomas (2004). Being No One: The Self-Model Theory of Subjectivity. Cambridge, MA: MIT Press.
Roden, David (2013a). “The Disconnection Thesis”, in Amnon Eden, Johnny Søraker, Jim Moor, and Eric Steinhart (eds.), The Singularity Hypothesis: A Scientific and Technological Assessment. Berlin/Heidelberg: Springer-Verlag, 281-298.
Roden, David (2013b). ‘Nature’s Dark Domain: An Argument for a Naturalized Phenomenology’, in Human Experience and Nature, Royal Institute of Philosophy Supplement 72. London: Cambridge University Press.
Stross, Charles (2006). Accelerando. London: Orbit.
Vinge, Vernor (1993). “The Coming Technological Singularity: How to Survive in the Post-Human Era”, Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace. Accessed 8 December 2007. http://www.rohan.sdsu.edu/faculty/vinge/misc/singularity.html.
 In contrast to the transparent multi-modal phenomenology of experience, human verbal thinking is relatively opaque since we are able to recollect earlier stages of processing to represent the syntactic and semantic properties of linguistic symbols.
Autonomous systems of the kind that we can conceive as emerging from our technology are liable to be modular assemblages of elements that can couple opportunistically with other entities or systems, creating new assemblages whose powers and dispositions are transformed and dynamically put into play by such couplings.
The best way of representing modularity is in terms of networks consisting of nodes and their interconnections. A network is modular if it contains “highly interconnected clusters of nodes that are sparsely connected to nodes in other clusters” (Clune, Mouret and Lipson 2013, 1). In autonomous assemblages, modules support functional processes that make a distinct and specialized contribution to maintaining the conditions necessary for other interdependent processes within the assemblage.
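The modularity criterion quoted above can be illustrated with a minimal sketch. The graph, node labels and density measure below are illustrative assumptions of mine, not drawn from Clune, Mouret and Lipson's paper: two small fully connected clusters joined by a single bridging link, so that connectivity is dense within clusters and sparse between them.

```python
# Toy illustration (my own construction) of network modularity:
# two clusters, dense inside, one sparse link between them.
edges = {
    (0, 1), (0, 2), (1, 2),   # cluster A: fully interconnected
    (3, 4), (3, 5), (4, 5),   # cluster B: fully interconnected
    (2, 3),                   # single sparse inter-cluster connection
}

def edge_density(nodes_a, nodes_b, edges):
    """Fraction of the possible links between the two node sets that exist."""
    cross = sum(1 for (u, v) in edges
                if (u in nodes_a and v in nodes_b)
                or (u in nodes_b and v in nodes_a))
    return cross / (len(nodes_a) * len(nodes_b))

a, b = {0, 1, 2}, {3, 4, 5}
# Within cluster A: 3 of the 3 possible internal links are present.
within_a = sum(1 for (u, v) in edges if u in a and v in a) / 3
between = edge_density(a, b, edges)
print(within_a, between)  # dense within, sparse between
```

On the quoted definition, the network counts as modular because the within-cluster density (1.0) greatly exceeds the between-cluster density (1 of 9 possible links).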
Modules may or may not be spatially localized entities. They may be relatively fragmented while exhibiting dynamical cohesion. An instance of a software object class such as an “array” (an indexed list of objects of a single type) need not be instantiated in contiguous regions of a computer’s physical memory. It does not matter where the data representing the array’s contents is physically located so long as the more complex program which it composes can locate that data when it needs it. Thus while it is possible that all assemblages must have some spatially bounded parts – organelles in eukaryotic cells and distributors in internal combustion engines come in spatially bounded packages, for example – not all functionally discrete parts of assemblages need be spatially discrete in the way that organelles are. Cultural entities such as technologies or symbols may consist of repeatable or iterable patterns rather than things, and may be conceived as repeatable particular events rather than objects (Roden 2004). Yet in systems – such as socio-technical networks – whose components are cued to recognize and respond to patterns, such entities can exert real causal influence by being repeated in varying contexts.
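The point about spatially scattered software objects can be illustrated with a toy example (my own; the class and its names are hypothetical). The “array” below holds its elements in an arbitrarily keyed backing store, simulating data spread over discontinuous regions of memory; indexed access still succeeds because the enclosing program can locate each element when it needs it.

```python
# Illustrative sketch: an indexed collection whose contents are physically
# scattered. Functional cohesion (lookup by index) does not require
# spatial contiguity of the underlying storage.

class ScatteredArray:
    def __init__(self, items):
        # Store each element under an arbitrary, non-sequential internal
        # key, standing in for a non-contiguous memory location.
        self._index = {i: hash(("cell", i)) for i in range(len(items))}
        self._store = {self._index[i]: item for i, item in enumerate(items)}

    def __getitem__(self, i):
        # Indexed lookup succeeds regardless of internal layout.
        return self._store[self._index[i]]

xs = ScatteredArray([10, 20, 30])
# xs behaves as a cohesive array even though nothing about its storage
# is "in one place": xs[0] is 10, xs[2] is 30.
```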
Importantly for our purposes, dynamical cohesion should not be conflated with functional stability. An entity can retain its dynamical integrity and intrinsic powers while subtending distinct wide functional roles in the systems to which it belongs. To use Don Ihde’s term, such entities are functionally “multistable”. An Acheulian hand axe – a technology used by humans for over a million years – might have been used as a scraper, a chopper or a projectile weapon. Modern technologies such as mobile phones and computers are, of course, designed to be multistable, though their uses can exceed the specifications of their designers, as when a phone is used as a bomb detonator (Ihde 2012). It seems that the decomposability of cognitive systems also confers multistability upon their parts, thus contributing to the functional autonomy of the system as a whole.
In cognitive science, the classical modularity thesis held that human and animal minds contain encapsulated, fast and dirty, automatic (mandatory) domain-specific cognitive systems dedicated to specialized tasks such as kinship evaluation, sentence parsing or classifying life forms. However, it is an empirical question whether the mind is wholly or partly composed of domain-specific cognitive agents and, as Keith Frankish notes, a further empirical question whether neural modularity also holds: that is, whether domain-specific cognitive functions map onto anatomically discrete regions of the human brain such as Broca’s area (traditionally associated with language processing) or the so-called “Fusiform Face Area” (Frankish 2012, 280). Neither the classical theory of mental modules nor the neural modularity thesis follows from the fact that human brains are decomposable in the network sense presupposed by assemblage theory.
We should nonetheless expect autonomous entities such as present organisms or hypothetical posthumans to be network-decomposable assemblages rather than systems in which every part is equally coupled with every other part, because modularity confers flexibility on known kinds of adaptive system. For example, in biological populations modularity is recognized as one of the necessary conditions of evolvability: “an organism’s capacity to generate heritable phenotypic variation” (Kirschner and Gerhart 1998, 8420). Some biologists argue that the transition from prokaryotic cells (whose DNA is not contained in a nucleus) to more complex eukaryotic cells (which have nucleated DNA as well as more specialized subsystems such as organelles) was accompanied by a decoupling of the processes of RNA transcription and subsequent translation into proteins. This may have allowed noncoding (intronic) RNA to assume regulatory roles necessary for producing more complex organisms, because the separation of sites allows the intronic RNA to be spliced out of the messenger RNA where it might otherwise disrupt the production of proteins. If, as seems to be the case, regulatory portions of intronic DNA and RNA are necessary for the production of higher organisms, then this articulation of DNA expression may have allowed the ancestor populations of complex multi-cellular organisms to explore gene-regulation possibilities without disabling protein expression (Ruiz-Mirazo and Moreno 2012, 39; Mattick 2004).
The benefits of articulation apply at higher levels of organization in living beings for reasons that may hold for autonomous “proto-ex-artefacts” poised for disconnection. Nervous systems need to be “dynamically decoupled” from the environment that they map and represent because perception, learning and memory rely on establishing specialized information channels and long term synaptic connections in the face of changing environmental stimulation. This entails a capacity “for cells to step back from the manifold of ambient stimulus and to be prepared to pick and choose which stimulus to make salient and thus in so doing a capacity to enjoy an unprecedented level of internal autonomy” (Moss 2006, 932–934; Ruiz-Mirazo and Moreno 2012, 44).
Network decomposition of internal components also seems to carry advantages within control systems, including those that might actuate posthumans one day. Research into locomotion in insects and other arthropods shows that, far from a central control system co-ordinating all the legs in a body, each leg tends to have its own pattern generator.
A coherent motion capable of supporting the body emerges from the excitatory and inhibitory actions of the distributed system rather than through co-ordination by a central controller. The evolutionary rationale for distributed control of locomotion can be painted in similar terms to that of the articulation of DNA transcription and expression considered above – a distributed system being far less fragile in the face of evolutionary tinkering than a central control architecture in which the function of each part is heavily dependent on those of other parts.
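A toy model can illustrate how coordinated motion emerges without a central controller. The sketch below is my own drastic simplification, not the biological model: each of two “legs” is a phase oscillator, and the oscillators are coupled with a mutually inhibitory (anti-phase) term. No controller tells the legs what to do; the alternating gait emerges solely from their local interaction.

```python
import math

def step(theta_left, theta_right, omega=1.0, k=-1.0, dt=0.01):
    """One Euler step for two coupled phase oscillators.
    A negative coupling constant k pushes them toward anti-phase."""
    d_left = omega + k * math.sin(theta_right - theta_left)
    d_right = omega + k * math.sin(theta_left - theta_right)
    return theta_left + dt * d_left, theta_right + dt * d_right

# Start the legs nearly in phase; no global coordinator intervenes.
theta_l, theta_r = 0.0, 0.1
for _ in range(5000):
    theta_l, theta_r = step(theta_l, theta_r)

# The legs settle into alternation: a phase gap of pi (half a cycle).
phase_gap = (theta_r - theta_l) % (2 * math.pi)
```

The stable anti-phase pattern here is an emergent product of the excitatory and inhibitory interactions between the two local generators, which is the structural point the insect-locomotion research makes.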
This rationale plausibly applies to human beings as well as to our immediate primate ancestors, especially in the case of sophisticated cognitive feats that require the organism to learn specific cultural patterns – such as languages – which would not have been stable or invariant enough to have selected for the component abilities that they require over evolutionary time (Deacon 1997, 322-334; the Visual Word Form Area is a particularly spectacular example of such “cultural recycling” – see below). While this is compatible with network decomposition, it may not tally with the classical modularity thesis since it suggests an evolutionary rationale for the promiscuous re-use of functionally multistable components.
Evidence from functional imaging suggests that anatomically discrete regions like Broca’s area or the Fusiform Face Area are co-opted by evolutionary and cultural processes in support of functionally disparate cognitive tasks. For example, relatively ancient areas in the human brain known to be involved in motor control are also involved in language understanding. This suggests that circuits associated with grasping the affordances and potentialities of objects were recruited over evolutionary time to meet the emerging cultural demands of symbolic communication (Anderson 2007, 14). In a recent target article on neural reuse in Behavioral and Brain Sciences, Michael Anderson cites research suggesting that older brain areas tend to be less domain-specific and more multistable – that is, that they tend to get re-deployed in a wider variety of cognitive domains (Anderson 2010, 247). Peter Carruthers and Keith Frankish likewise argue that circuits in the visual and motor areas which were initially involved in controlling and anticipating actions have become co-opted in the production and monitoring of propositional thinking (beliefs, desires, intentions, etc.) through the production of inner speech. An explicit belief, for example, can be implemented as a globally available action-representation – an offline “rehearsal” of a verbal utterance – to which distinctive commitments to further action or inference can be undertaken (Carruthers 2008). Andy Clark cites experimental work on Pan troglodytes chimpanzees which comports with Carruthers and Frankish’s assumption that cognitive systems adapted for pattern recognition and motor control can be opportunistically reused to bootstrap an organism’s cognitive abilities. Here, an experimental group of chimps was trained to associate two different plastic tokens with pairs of identical and pairs of different objects respectively.
The experimental group were later able to solve a difficult second-order difference categorization task that defeated the control group of chimps who had not been trained to use the tokens:
The more abstract problem (which even we sometimes find initially difficult!) is to categorize pairs-of pairs of objects in terms of higher order sameness or different. Thus the appropriate judgement for pair-of-pairs “shoe/shoe and banana/shoe” is “different” because the relations exhibited within each pair are different. In shoe/shoe the (lower order) relation is “sameness”; in banana/shoe it is difference. Hence the higher-order relation – the relation between the relations – is difference (Clark 2003, 70).
Interestingly, Clark notes that the chimps in the experimental group were able to solve the problem without repeatedly using the physical tokens, suggesting that they were able to associate “difference” and “sameness” with inner surrogates similar to the offline speech events posited by Carruthers and Frankish (71; see also Wheeler 2004).
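The structure of the second-order task is simple enough to state algorithmically. The sketch below (function names are mine, purely for illustration) computes the lower-order same/different relation within each pair and then the higher-order relation between those relations, reproducing Clark’s shoe/banana example.

```python
# Illustrative sketch of the second-order difference categorization task.

def relation(pair):
    """Lower-order judgement: are the two items the same or different?"""
    a, b = pair
    return "same" if a == b else "different"

def higher_order_relation(pair_1, pair_2):
    """Second-order judgement: do the two pairs instantiate the same
    lower-order relation?"""
    return "same" if relation(pair_1) == relation(pair_2) else "different"

# Clark's example: shoe/shoe exhibits sameness, banana/shoe exhibits
# difference, so the relation between the relations is "different".
verdict = higher_order_relation(("shoe", "shoe"), ("banana", "shoe"))
```

Note that the second function never inspects the objects themselves, only the outputs of the first: this mirrors the role of the tokens (or their inner surrogates) as re-usable stand-ins for the lower-order relations.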
This account of the emergence of specialized symbolic and linguistic thinking via the reuse of neural circuits evolved for pattern recognition and motor control illustrates a more general ontological schema. Assemblages – whether human, inhuman, animate or inanimate – inherit the capacity to couple with larger assemblages from their structure and components, and are likewise constrained by those powers. Carbon atoms have the power to assemble complex molecular chains because their four valence electrons permit the formation of multiple chemical bonds. Simpler prokaryotic cells may lack the capacity to evolve the regulatory networks required to form multicellular affiliations because their encoding process is insufficiently differentiated. Likewise, although specific neural circuits may be inherently multistable, it does not follow that each can do anything. Each may have specific “biases” or computational powers that reflect its evolutionary origins (Anderson 2010, 247). For example, Stanislas Dehaene and Laurent Cohen review some remarkable results suggesting the existence of a Visual Word Form Area, a culturally universal cortical map situated in the fusiform gyrus of the temporal lobe, which is involved in the recognition of discrete and complex written characters independently of the writing system.
As Dehaene and Cohen observe, it is not plausible to suppose that the VWFA evolved specifically to meet the demands of literate cultures, since writing was invented only 5400 years ago, while only a fraction of humans have been able to read for most of this period (Dehaene and Cohen 2007, 384). Thus it appears that the cortical maps in the VWFA have structural properties which make them ideal for reuse in script recognition despite not having evolved for the representation of written characters (among the factors suggested is that the VWFA is located in a part of the fusiform region receptive to finely discriminated visual input from the fovea – 389).
Coupling an assemblage with another system – e.g. a transcultural code such as a writing or number system – may, of course, increase the functional autonomy of a system by allowing it to respond fluidly and adaptively to the demands of its environment, enlisting new affiliations and resources which then come to be functional for it. Literacy and numeracy have become functionally necessary for economic activity in advanced industrial societies – clearly this was not always so! However, this is only possible because both the assemblage and its parts are open to functional shifts that, in effect, allow the creation of new social “megamachines” which extend beyond the coupled individuals. Thus while complex assemblages articulated into many functionally open subsystems may be more functionally autonomous than less articulated ones – more capable of accruing new functions – they are also more apt to be “deterritorialized” by happening on new modes of existence and new ways of being affected (DeLanda 2006, 50-51).
Anderson, Michael (2007). “Massive redeployment, exaptation, and the functional integration of cognitive operations”. Synthese, 159(3), 329-345.
Anderson, M. L. (2010). “Neural reuse: A fundamental organizational principle of the brain.” Behavioral and Brain Sciences, 33(4), 245.
Carruthers, Peter (2008). “An architecture for dual reasoning”. In J. Evans & K. Frankish (eds.), In Two Minds: Dual Processes and Beyond. Oxford University Press.
Clark, Andy (2003). Natural Born Cyborgs. Oxford: Oxford University Press.
Clune, J., Mouret, J. B., & Lipson, H. (2012). “The evolutionary origins of modularity”. arXiv preprint arXiv:1207.2743.
Deacon, Terrence (1997). The Symbolic Species: The Co-evolution of Language and the Human Brain. London: Penguin.
Dehaene, S., & Cohen, L. (2007). “Cultural recycling of cortical maps”. Neuron, 56(2), 384-398.
DeLanda, M. (2006), A New Philosophy of Society: Assemblage Theory and Social Complexity, London: Continuum.
Frankish, Keith (2012). “Cognitive Capacities, Mental Modules, and Neural Regions”. Philosophy, Psychiatry, and Psychology 18 (4).
Ihde, D. (2012). “Can Continental Philosophy Deal with the New Technologies?” Journal Of Speculative Philosophy, 26(2), 321-332.
Kirschner, Marc and Gerhart, John (1998). “Evolvability”. Proceedings of the National Academy of Sciences USA, 95, 8420-8427.
Moss, L. (2006). “Redundancy, plasticity, and detachment: The implications of comparative genomics for evolutionary thinking”. Philosophy of Science, 73, 930–946.
Roden, David (2004). “Radical Quotation and Real Repetition”. Ratio (new series), XVII(2), 191-206.
Ruiz-Mirazo, Kepa & Moreno, Alvaro (2012). “Autonomy in evolution: from minimal to complex life”. Synthese 185 (1):21-52.
Wheeler, M. (2004). “Is language the ultimate artefact?.” Language Sciences, 26(6), 693-715.
 One of the benefits of so-called “object-oriented” (OO) programming languages like Java over “procedural” programming languages such as COBOL is that OO programs organize software objects in encapsulated modules. When a client object in the program has to access an object (e.g. a data structure such as a list) it sends a message to the object that activates one of the object’s “public” methods (e.g. the client might “tell” the object to return an element stored in it, add a new element or carry out an operation on existing elements). However, the client’s message does not specify how the operation is to be performed. This is specified in the code for the object. From the perspective of the client, the object is a black box that can be activated by public messages yielding a consumable output. This means that changes in how the methods of the object are implemented do not force developers to change the code in other parts of the program, since these implementation details do not “matter” to the other objects. Maintenance and development of software systems becomes simpler.
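 The encapsulation described here can be sketched in a few lines (in Python rather than Java, for brevity; the class and its methods are hypothetical illustrations). The client interacts with the object only through its public interface, so the internal implementation can be swapped without any client code changing.

```python
# Illustrative sketch of OO encapsulation: clients see only public
# methods; how the object does its work is hidden inside it.

class SortedBag:
    """Public interface: add() and smallest(). Clients never see
    the internal storage."""

    def __init__(self):
        self._items = []  # internal detail: a plain list

    def add(self, x):
        self._items.append(x)

    def smallest(self):
        # The client does not specify HOW the minimum is found.
        # Replacing the list with a heap or a tree here would not
        # force any change in client code.
        return min(self._items)

# A "client" uses only the public methods.
bag = SortedBag()
for n in (7, 2, 9):
    bag.add(n)
```

From the client’s perspective `SortedBag` is a black box: `bag.smallest()` yields a consumable output whatever the implementation behind it.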
 The cochlear cells in our inner ear are connected to hair-like cells which are receptive to sound vibrations. This specialized arrangement allows the cochlea to conduct a fast spectrum analysis on incoming vibrations, assaying the relative amplitudes of components in complex sounds.
There’s a lively post by Uppinder Mehan over at IEET which amusingly illustrates why it is problematic to cash out human status in terms of gross substrate similarity (two upper limbs, no claws, no spidey senses, etc.). However, Mehan’s argument still depends on equally question-begging transhumanist assumptions about the substrate invariance of the human. Bottom line: humans are the descendants of nonhumans and might well have nonhuman descendants. We are not yet in a position to know whether differently embodied minds or minds born of emergent AI programs will think like us or be able to co-exist with us. To think this through you need to add Speculative Posthumanism to the critical discourse of the “Minnesota-style Posthumanities” and the normative doctrine of transhumanism.