Eric Schwitzgebel has a typically clear-eyed, challenging post on the implications of (real) artificial intelligence for our moral systems over at the Splintered Mind. The take-home idea is that our moral systems (consequentialist, deontological, virtue-ethical, whatever) are adapted for creatures like us. The weird artificial agents that might result from future iterations of AI technology might be so strange that human moral systems would simply not apply to them.
Scott Bakker follows this argument through in his excellent Artificial Intelligence as Socio-Cognitive Pollution, arguing that blowback from such posthuman encounters might literally vitiate those moral systems, rendering them inapplicable even to us. As he puts it:
The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence is a development that raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines.
As any reader of Posthuman Life might expect, I think Eric and Scott are asking all the right questions here.
Some (not me) might object that our conception of a rational agent is maximally substrate neutral. It’s the idea of a creature we can only understand “voluminously” by treating it as responsive to reasons. According to some (Davidson/Brandom) this requires the agent to be social and linguistic – placing such serious constraints on “posthuman possibility space” as to render this discourse moot.
Even if we demur on this, it could be argued that the idea of a rational subject as such gives us a moral handle on any agent – no matter how grotesque or squishy. This seems true of the genus “utility monster”. We can acknowledge that UM’s have goods and that consequentialism allows us to cavil about the merits of sacrificing our welfare for them. Likewise, agents with nebulous boundaries will still be agents and, so the story goes, rational subjects whose ideas of the good can be addressed by any other rational subject.
So according to this Kantian/interpretationist line, there is a universal moral framework that can grok any conceivable agent, even if we have to settle details about specific values via radical interpretation or telepathy. And this just flows from the idea of a rational being.
I think the Kantian/interpretationist response is wrong-headed. But showing why is pretty hard. A line of attack I pursue concedes to Brandom-Davidson that we have the craft to understand the agents we know about. But we have no non-normative understanding of the conditions something must satisfy to be an interpreting intentional system or an apt subject of interpretation (beyond commonplaces like heads not being full of sawdust).
So all we are left with is a suite of interpretative tricks whose limits of applicability are unknown. Far from being a transcendental condition on agency as such, it’s just a hack that might work for posthumans or aliens, or might not.
And if this is right, then there is no future-proof moral framework for dealing with feral robots, Cthulhoid monsters or the like. Following First Contact, we would be forced to revise our frameworks in ways that we cannot possibly have a handle on now. Posthuman ethics must proceed by way of experiment.
Or they might eat our brainz first.
According to the Disconnection Thesis (Roden 2012; 2014: Chapter 5) a posthuman is an agent descended from some part of the human socio-technical system that has “gone feral”. In its ancestral form, it may have served human ends, or have been narrowly human itself, but (post-disconnection) has accrued values and roles elsewhere.
To date there are no posthumans so we can only guess at their likely powers. But it seems safe to assume that anything capable of cutting out of the human system would need to be at least as flexible and adaptable as humans are themselves.
These powerful entities might be indifferent to humans, but they may not like us at all; or like us in ways we would not like to be liked. They may view us as a threat, or they may be immensely powerful sadists who devote some part of their technological prowess to killing and torturing us. If posthumans are conceivable, so are very bad posthumans.
So can we do some contingency planning to insure against the emergence of posthuman dark lords? To do this we would need some handle on the kinds of current technologies that might induce a dark lord disconnection (DLD). But what kinds of technologies could these be?
It might seem that some technological possibilities can be discerned a priori – by consulting reliable conceptual “intuitions” about the extendible powers of current technologies. For example, a being like Skynet – the genocidal military computer in James Cameron’s Terminator films – seems a plausible occupant of a posthuman timeline; whereas Sauron, the supernatural dark lord of Tolkien’s Lord of the Rings, does not. However, since the work of Saul Kripke in the 1970s many philosophers have come to accept that there are a posteriori natural possibilities and necessities that are only discoverable empirically. That light has the same velocity in every inertial reference frame upsets common-sense intuitions about relative motion and could not have been discovered by reflecting on pre-relativistic concepts of light.
Claims about hypothetical technological possibility may be as vulnerable to refutation as naive physics. States like the US and China employ computers to co-ordinate military activities so a Skynet seems the more plausible posthuman antagonist. But the fact that there are computers but no supernatural dark lords does not entail that their capacities could be extended in any way we imagine. Light bulbs exist as well as computers, but maybe a Skynet is no more technologically possible than Byron the Intelligent light bulb in Thomas Pynchon’s fabulist novel Gravity’s Rainbow.
So here’s a thing. Posthuman Possibility Space (the set of technically possible routes to disconnection) may contain a Dark Lord Possibility Sub-Space – the trajectories all of which lead to a DLD! We may not have any reliable indication of what (if anything) belongs to it. But, quite possibly, it is out there, waiting.
Roden, David. 2012. “The Disconnection Thesis”. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, A. Eden, J. Søraker, J. Moor & E. Steinhart (eds), 281–98. London: Springer.
This is a sketch of a partial value theory that I’ve been developing while completing my book Posthuman Life. If there are similar theories out there, I’d be grateful for links to bibdata so that I can properly acknowledge them!
In order to construct an anthropologically unbounded account of posthumans, we need a psychology-free account of value. There may, after all, be many possible posthuman psychologies but we don’t know about any of them to date. However, the theory requires posthumans to be autonomous systems of a special kind: Functionally Autonomous Systems (see below). I understand “autonomy” here as a biological capacity for active self-maintenance. The idea of a system which intervenes in the boundary conditions required for its existence can be used to formulate an Autonomous Systems Account of function which avoids some of the metaphysical problems associated with the more standard etiological theory. The version of ASA developed by Wayne Christensen and Mark Bickhard defines the functions of an entity in terms of its contribution to the persistence of an autonomous system, which they conceive as a group of interdependent processes (Christensen and Bickhard 2002: 3). Functions are process dependence relations within actively self-maintaining systems.
Ecological values are constituted by functions. This conception, in turn, allows us to formulate an account of “enlistment” which then allows us to define what it is to be an FAS.
1) (ASA) Each autonomous system has functions belonging to it at some point in its history. Its functions are the interdependent processes it requires to remain autonomous at that point.
2) (Value) If a process, thing or state is required for a function to occur, then that thing or process is a value for that function. Any entity, state or resource can be a value. For example, the proper functioning of a function can be a value for the functions that require it to work.
3) (Enlistment) When an autonomous system produces a function, then any value of that function is enlisted by that system.
4) (Accrual) An FAS actively accrues functions by producing functions that are also values for other FAS’s.
5) (Functional Autonomy) A functionally autonomous system (FAS) is any autonomous system that can enlist values and accrue functions.
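The interlocking definitions (1)–(5) can be made concrete with a toy model. The sketch below is purely illustrative: the class, method, and example names (AutonomousSystem, enlisted_values, and so on) are my own inventions for exposition, not part of the theory’s apparatus, and the mining scenario anticipates the example in the following paragraph.

```python
# Toy model of the Functionally Autonomous System (FAS) definitions.
# All names here are illustrative inventions, not Roden's own notation.

class AutonomousSystem:
    """An actively self-maintaining system: a set of interdependent
    processes (functions), each requiring certain values (Defs. 1-2)."""

    def __init__(self, name):
        self.name = name
        self.functions = {}  # function name -> set of values it requires

    def add_function(self, fn, required_values):
        # Def. 2 (Value): whatever a function requires to occur is a
        # value *for* that function.
        self.functions[fn] = set(required_values)

    def enlisted_values(self):
        # Def. 3 (Enlistment): by producing a function, the system
        # enlists every value that function requires.
        return set().union(*self.functions.values())

    def accrues_function_within(self, other):
        # Def. 4 (Accrual): a system accrues a function when one of its
        # own functions is itself a value enlisted by another system.
        return bool(set(self.functions) & other.enlisted_values())


# A mining operation enlists ore and machinery; a wider socio-technical
# system enlists "mining" itself, so the mining operation accrues a
# function within it -- making both FAS's in the sense of Def. 5.
miner = AutonomousSystem("mining operation")
miner.add_function("mining", {"ore", "machinery"})

wide_human = AutonomousSystem("wide human assemblage")
wide_human.add_function("industry", {"mining", "labour"})

print(sorted(miner.enlisted_values()))            # ['machinery', 'ore']
print(miner.accrues_function_within(wide_human))  # True
```

Note that accrual here is relational: the mining operation accrues a function within the wide human assemblage, but not vice versa, which matches the asymmetry built into definition (4).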
People are presumably FAS’s on this account, but so are nonhuman organisms and (perhaps) lineages of organisms. Likewise, social systems (Collier and Hooker 1999) and (conceivably) posthumans. To date, technical entities are not FAS’s because they are non-autonomous. Historical technologies are mechanisms of enlistment, however. For example, without mining technology, certain ores would not be values for human activities. Social entities, such as corporations, are autonomous in the relevant sense and thus can have functions (process interdependency relations) and constitute values of their own. However, while not narrowly human, current social systems are wide humans, not posthumans. As per the Disconnection Thesis: posthumans would be FAS’s no longer belonging to WH (the Wide Human socio-technical assemblage – see Roden 2012).
This is an ecological account in the strict sense of specifying values in terms of environmental relations between functions and their prerequisites (though “environment” should be interpreted broadly to include endogenous as well as exogenous entities or states). It is also an objective rather than subjective account which has no truck with the spirit (meaning, culture, subjectivity, etc.). Values are just things which enter into constitutive relations with functions (Definition 2 could be expanded and qualified by introducing degrees of dependency). Oxygen was an ecological value for aerobic organisms long before Lavoisier. We can be ignorant of our values and mistake non-values for values, etc. It is also arguable that some ecological values are pathological in that they support some functions while hindering others.
The theory is partial because it only provides a sufficient condition for value. Some values – Opera, cigarettes, incest prohibitions and sunsets – are arguably things of the spirit, constituted as values by desires or cultural meanings.
Christensen, W. D., and M. H. Bickhard. 2002. “The Process Dynamics of Normative Function.” The Monist 85 (1): 3–28.
Collier, J. D., and C. A. Hooker. 1999. “Complexly Organised Dynamical Systems.” Open Systems & Information Dynamics 6 (3): 241–302.
Roden, David. 2012. “The Disconnection Thesis”. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, A. Eden, J. Søraker, J. Moor & E. Steinhart (eds), 281–98. London: Springer.
An issue I do not have time to consider is that ecological dependency is transitive. If a function depends on a thing whose existence depends on another thing, then it depends on that other thing. Ecological dependencies thus overlap.
 Addictive substances may fall into this class.
In this highly illuminating talk from EXPO1 at MOMA, Ray proposes that there is nothing inherently wrong with the transhuman reengineering of nature on the “promethean” grounds that nature has no ethical dispensation. Thus there is no natural, ontological or theological order violated by the extension of human cognitive powers or by the creation of synthetic life. Such processes are potentially violent and destructive, but that is acceptable as long as we distinguish between “good” emancipatory violence and that which oppresses and restricts the life chances of rational subjects.
I’m wholly in agreement with Ray in his rejection of theological objections to the technological refashioning of human and non-human nature. I’m less convinced that the idea of emancipation is an adequate horizon within which to adjudicate between the new world-engines that might lie before us. But I agree that we need some ethically substantive framework in which to do this. My own leaning is increasingly towards a pluralist moral realism – the claim that there are objectively good or bad locations in Posthuman Possibility Space but no moral hierarchy in which these are enfolded in turn. So to adjudicate these we need to “sample” them by experimenting with bodies, things and minds.
Ray also peppers his talk with some references to J G Ballard’s short story “The Voices of Time”, one of his many narratives of ontological catastrophe. Ballard’s own position on emancipation is profoundly ambivalent, as Baudrillard observes. Something to return to in a later post or article, I think.
Charles Stross’ science fiction novel Accelerando provides a vivid and blackly funny portrayal of a transition from a merely transhuman to a genuinely posthuman world.
In Accelerando, the Singularity has arrived by the 22nd Century (Vinge 1993). The self-improving AI’s that now run the world are “wide human descendants” of human corporations and automated legal systems, which achieved both sentience and a form of legal personhood back in the 21st. As Stross’ narrator observes, the phrase “smart money” has taken on an entirely new meaning.
Eventually, these “corporate carnivores” – known as the “Vile Offspring” – institute a new economics (Economics 2.0) in which supply and demand relationships are computed too rapidly for those burdened by a “narrative chain” of personal consciousness to keep up. Under Economics 2.0 first-person subjectivity is replaced “with a journal file of bid/request transactions” between autonomous software agents. E 2.0 is so remorselessly efficient that it comes to dominate not only the Earth but also the majority of the solar system. Whole planets are pulverized and diverted to fast-thinking dust clouds of smart matter “blooming” around the sun (Stross 2006, 208-10).
This post-singularity scenario certainly seems bad for humans. Even their souped-up transhuman offspring prove equally incapable of functioning within E 2.0 and can only flee to the outer solar system and beyond as their worlds are “ethnically cleansed”.
At the same time, it is not clear that E 2.0 is really “good” for posthumans in a way that might conceivably outweigh its bad impact on humans.
If the posthuman entities – such as the Wide Descendants eating up the inner solar system of Stross’ novel – lack a linear, narrative consciousness, can their form of existence be worthy of ethical consideration?
Well, it might be argued that any being with conscious awareness – even one that does not involve rational subjectivity or personhood – is worthy of some moral consideration. Most accept that nonhuman animals are conscious of pains and pleasures, and it is plausible to argue that their interests in avoiding pains and having pleasures are identical to those of humans.
However, many humanists claim that the reasoning prowess of humans distinguishes them radically from nonhuman animals. Responsiveness to reasons is both a cognitive and a moral capacity. For Kant, this capacity to choose the reasons for our actions – to form a will, as he puts it – is the only thing that is good in an unqualified way and is the most important distinguishing characteristic of humanity as opposed to animality.
Even humanists for whom the human capacity for self-shaping is one good among many claim that “autonomy” confers a dignity on humans that should be protected by laws and cultivated.
Beings with the capacity for autonomy and the moral status that goes with it are commonly referred to as “persons”. Locke defined a person as “a thinking intelligent being that has reason and reflection and can consider itself as itself, the same thinking thing in different times and places”. If Locke is right about the psychological preconditions for personhood, then beings such as the Vile Offspring cannot count as persons because, as Stross puts it, their phenomenology lacks the “narrative centre” that a being needs to consider itself the same thing at different times. The practical rationality described in most post-Kantian conceptions of autonomy might not be accessible to a being with non-subjective phenomenology. Such an entity would be incapable of experiencing itself as having a life that might go better or worse for it.
If humanists are right to say that persons have special moral worth and we add to this the claim that there could be no nonpersons with greater or equivalent moral worth than persons, then very weird and very non-human posthumans such as Vile Offspring who lack personal phenomenology would not be as worthy of moral consideration as humans or transhumans.
Posthumans lacking personhood and the capacity for pleasure and pain would not be sources of any kind of moral claim. Posthumans lacking personhood but possessing functional equivalents of pleasure or pain could be granted a status equivalent to that of non-human animals, which also lack the psychological prerequisites of personhood.
Posthuman singularity ethics would then be possible only in an etiolated form, since it would not be applicable where our wide human descendants departed radically from human phenomenological invariants.
Perhaps this is what accounts for the “vileness” of the Vile Offspring: that they are not conscious subjects with plans for life and conceptions of the good, but churning clouds of super-intelligent matter driven by inchoate drives – like H P Lovecraft’s blind, idiot god Azathoth.
However, this analytic of the vile is premature. For it assumes that there is a moral hierarchy mapping onto a psychological or phenomenological hierarchy. But the fact that there are beings – persons – with the distinctive mental properties described by Locke and Kant does not entail that all beings lacking these properties must be morally inferior, or even vile. For it is conceivable that there could be intelligent beings whose experience lacks some prerequisites for personhood but has phenomenological attributes that are different yet not morally inferior.
We humans might find it hard to conceive what such impersonal phenomenologies could be like (to say of them that they are “impersonal” is not to commit ourselves regarding the kinds of experiences they furnish). However, this difficulty may simply reflect the fact that our phenomenology constrains our grasp of phenomenological possibility and necessity (Metzinger 2004: 213; Roden 2013b).
In particular, our phenomenology may be characterized by variable degrees of what Thomas Metzinger calls “autoepistemic closure”.
A phenomenology is autoepistemically closed if the processes that generate it are inaccessible within it. According to Metzinger, human personal experience is a dynamic and temporally situated model of the world, which represents the modeller as a distinct component. The phenomenal world model thus includes a phenomenal self-model or PSM. However, neither model represents the subpersonal cognitive processes that implement them. To borrow a phrase from Michael Tye: the phenomenal world-model and self-model are “transparent” – we seem to look through them into an immediately given world out there and a self-present mental life “in here” (Metzinger 2004: 131, 165).
Both immediacies, according to Metzinger, are epistemic illusions generated by the model’s insensitivity to its computational underpinnings. There is no self or subject doing the looking. The experienced self is, rather, the simulated content of the PSM rather than the subpersonal process that generates it.
If, as Metzinger claims, we are not self-intimating Cartesian selves or Kantian transcendental subjects but self-models, it is little wonder that our phenomenology affords limited insight into the space of possible minds. For example, our subjectivity seems to exist in a spatial-temporal pocket: a situated, embodied self and an ever evolving present. It is characterized by a bivalent distinction between self and other, non-mine and mine and a sense of temporal newness – or presentationality – “a virtual window of presence” that gives us a baseline with which to distinguish actuality and simulated possibility (Ibid. 42, 96). But this representational scheme may depend on the fact that our sensory and motor systems are “integrated within the body of a single organism”. Other kinds of life – e.g. “conscious interstellar gas clouds” or (more saliently for us) decentred post-human “swarm” intelligences like the Vile Offspring – might have experiences of a quite different nature (Metzinger 2004: 161).
A physically distributed entity with computing power to burn might support a “multi-threaded” and “multi-level” phenomenology that tracks the adventures of distributed processing sites while providing high-resolution models of its own cognitive processes. Such a distributed consciousness might have a very different functional structure to human consciousness.
A multi-threaded phenomenology might employ different strategies for modelling relationships between the modeller and its environment. We cannot easily imagine what such a phenomenology would be like – but inability to imagine it is not a demonstration of its impossibility.
So it is at least conceivable that a nonhuman phenomenology could be impersonal, but have representational characteristics no less sophisticated than “higher order” moral properties such as autonomy in humans. If personhood and autonomy are not unique “higher-order moral properties” and we are not yet in a position to compare them with posthuman modes of being, then we have no grounds to assume that they trump other candidates for ethical consideration. So we have very weak grounds for believing that persons (or autonomous human subjects) stand at the moral summit or centre of creation.
If that is right, then a person-relativist humanist ethic should be rejected along with a species relativist one. There may be non-personal modes of existence following a singularity (or posthuman-maker) no less valuable than those accessible to persons. This is compatible with the claim that persons have some intrinsic moral worth – though it does not entail this. If this value is genuinely intrinsic it is presumably unaffected by the existence of different modes of existence with their own intrinsic worth.
I think this possibility implies a form of posthuman justice. This is not the postmetaphysical, procedural justice described by Rawls and other liberal anti-perfectionists. Posthuman justice cannot be predicated on “fair terms of co-operation” between citizens of a state since any human-posthuman disconnection would, arguably, preclude a republic of humans and posthumans (Roden 2013a).
Now, we could try to express a formal principle of justice on the basis of the assumption that there could be valuable posthuman forms of existence. For example:
We should give equivalent consideration to such modes of being, whatever they may be.
I use “equivalent” rather than “identical” since it would be presumptuous to describe a nonpersonal intelligence as having identical interests to a personal one.
However, this substitution does not achieve much. It does not tell us how these interests are equivalent or what duties might flow from the principle. As a guide to action or to life, the formal principle is not worth the pixels it is written in.
To invert Rawls’ famous disclaimer: the theory of posthuman justice is metaphysical, not political. It does not tell us what to do or how to coordinate our institutions. It just allows (for want of countervailing arguments) that potential posthuman lives could support modes of existence that are not less than ours.
We could choose not to acknowledge these potential lives – were it possible to do so – but this refusal to acknowledge posthuman “otherness” would arguably be a kind of failure. It would be equivalent to the claim that something into which our insight is really very limited – “normal” human subjectivity and personhood – has a superior claim over the nonpersonal and potentially vile occupants of posthuman possibility space. This position might be warranted if our place in posthuman possibility space were not under consideration – e.g. if we were comparing the higher order moral properties of actual humans with actual nonhuman animals. But our attitude to our nonhuman Wide Descendants is at issue. Refusal to consider this possibility would be an intellectual failure as well as a kind of injustice.
Now, I think some would object that this capacious metaethical statement simply fails to do justice to the difficulty and danger attending an actual disconnection scenario. How, for example, could it guide us in an alien post-singularity environment of the kind described in Accelerando? There the remaining humans cannot communicate with or interpret the “radically other” posthumans eating up the mass of the inner solar system. (Near the end of Accelerando, the Vile Offspring start to resurrect every human who ever existed. Nobody finds out why.)
So we might concede the metaphysical principle that radically alien posthumans could merit some interpretative efforts on our part; but only if these were not futile.
No ethical principle should exhort us to act in vain, it seems. In cases where posthumans could be very radically alien, a Xenophobic Bias in favour of humans or fellow persons would appear to be the only ethical option that humans or persons could realistically pursue.
However, the idea of the “radically alien” that is in play here is philosophically problematic.
Firstly, we should distinguish between kinds of alienness. The autoepistemic closure of human phenomenology may make it hard to imagine or understand some alien minds; it does not imply that such understanding is impossible.
Autoepistemic closure is not cognitive closure. The fact that our self-model does not represent itself as representational or computational does not entail that we could not acquire a theoretical grasp of its representational or computational structure – this is precisely the point of Metzinger’s work and of others working in the science of consciousness.
This argument applies generally. The fact that a being might have a very different experience of the world to ours does not entail that we could not come to understand how that experience is constituted. Nor does it entail that such beings would be uninterpretable. Ethologists and pet owners regularly apply what Dennett refers to as the “intentional stance” to nonhuman animals – cats, dogs or monkeys, say – without worrying about the minutiae of their phenomenology.
To take up the intentional stance towards a system is to impute to it the beliefs and desires that it should have – given the kind of system it is – and then to see whether its behaviour can be predicted on this basis. Dennett describes how we might apply the IS to raccoons:
One can often predict or explain what an animal will do by simply noticing what it notices and figuring out what it wants. The raccoon wants the food in the box-trap, but knows better than to walk into a potential trap where it can’t see its way out. That’s why you have to put two open doors on the trap–so that the animal will dare to enter the first, planning to leave by the second if there’s any trouble. You’ll have a hard time getting a raccoon to enter a trap that doesn’t have an apparent “emergency exit” that closes along with the entrance (Dennett 1995).
The raccoon’s responses to the one-door trap and its propensity to be seduced by the two-door trap justify the following interpretation of raccoon mental life: that raccoons have beliefs (or “beliefs”) about the numbers of doors in traps and that they are averse to traps with only one door. Thus raccoons are intentional systems. This act of interpretation does not entail understanding what it is like for the raccoon to experience an aversion to one-door traps. Thus phenomenological similarity does not seem to be a necessary condition for interpreting nonhumans.
However, similarity of conceptual frameworks might be such a condition. If raccoons acted in a way that made it impossible to identify conceptual distinctions such as that between one- and two-door traps, then this particular intentional-stance interpretation would not be possible.
So could posthumans be radically alien by virtue of having concepts or conceptual schemes that no human could have?
At this point an objector might become suspicious of my talk of “alien” minds and phenomenologies, for there are well-rehearsed philosophical arguments against radically incommensurate or alien conceptual schemes or languages which give cause to be suspicious of the ‘very idea’ of radically alien intelligences. The most famous of these is advanced by Donald Davidson in ‘On the Very Idea of a Conceptual Scheme’.
In ‘Idea’ Davidson claims that theories of conceptual incommensurability must construe conceptual schemes in one of two ways: in terms of a Kantian scheme/content dualism, or in terms of a relation of ‘fitting’ or ‘matching’ between language and world.
However, he argues that the Kantian trope presupposes that the thing organized – experience, say – is composite in a way that affords comparison with our conceptual scheme after all (Davidson 2001a, 192). Since incommensurability implies incomparability, the propositional trope – fitting the facts or the totality of experience, or whatever – is all that is left. For Davidson, this just means that the idea of an acceptable conceptual scheme is one that is mostly true (Ibid. 194). So an alien conceptual scheme or language would be largely true but uninterpretable (Ibid.).
For Davidson’s interpretation-based semantics, this is equivalent to a language recalcitrant to radical interpretation. For interpretation-based semantics, to have content or meaning just is to be interpretable as having that content or meaning, whether by “native speakers” or by uninformed outsiders (“radical interpreters”) who start out with no knowledge of the idiom at all. Thus an uninterpretable conceptual scheme would not be intelligible even to “native insiders”. It would not have any content or meaning at all and thus would not be true or false of anything at all.
To re-state this in terms of the current problematic: interpretation-based semantics states that if alien posthumans had minds, they would have interpretable representational states capable of reliably tracking truths.
So Davidson’s position implies that, regardless of variations in phenomenology, there cannot be any radically uninterpretable minds: whether alien, animal or posthuman. Thus any posthuman mind should be interpretable, in principle, by any human mind. This suggests that Vernor Vinge’s concern that the singularity might take us beyond good and evil, into a world in which human ethical frameworks simply lack applicability, is unfounded (Vinge 1993). Strictly speaking there could be no such thing as a radical alien.
In presenting the Davidsonian argument against radical aliens, I’ve skirted some difficult technical issues about the nature of interpretative theories: e.g. whether a theory of truth for a language can capture what a native speaker grasps when they understand the language.
I have also ignored the distinction between interpreting public utterances and interpreting mental contents. Davidson assumes that they are part and parcel of the same activity, but he might well be wrong. Paul Churchland, for one, argues that human and animal concepts are fundamentally non-propositional in structure and thus imperfectly captured in public language. If so, Davidson is wrong to assume that any adequate conceptual scheme must thereby be true, since only sentences or semantic contents of sentences (propositions) can be true.
It is thus conceivable that weird posthumans such as the Vile Offspring would not think in sentences and thus would not deal in truths at all. Admittedly, the same could be true of raccoons and other non-human animals. Even if radical interpretation Davidson-style would not be an appropriate interpretative gambit, something like Dennett’s intentional stance – which makes no assumptions about inner or outer representational format at all – might be an option in a semantic emergency.
However, even if we assume that the intentional stance or radical interpretation could work in such situations, it does not follow that it will work for arbitrary interpreters. In particular, there is no guarantee that it will work for some arbitrary human descendant of current humans. Thus Davidson’s and Dennett’s interpretationist approaches to content provide some grounds for believing that a Vile Offspring would not be a cognitive thing-in-itself sealed off from minds of a different kind. But this just means that if we could learn to follow whatever passes for inferences among the Vile Offspring and track the recondite facts that concern them, we would understand vilese.
Yet vilese could be as far beyond any wide or narrow human capability as human inference is beyond any raccoon. Moreover, phenomenology could be a limiting factor here: a Vile Super-Intelligence might be exquisitely sensitive to perspectival facts that are fully objective, yet don’t show up for beings with a different kind of Dasein.
This problem does not seem to arise for humans interpreting raccoons because, I take it, we are much smarter than they are. We can easily mimic the inferences that they draw and we can easily reconstruct what is important for them. In the case of world-chomping clouds of smart matter, we might not be so fortunate.
What are the implications of this for a Posthuman or post-Singularity Ethics?
Well, we have considered interpretationist grounds for believing that there could be no posthuman minds recalcitrant to interpretation in principle.
At best, we can infer that posthumans won’t be utterly transcendent – like the God of Negative Theology or Kant’s thing-in-itself. Thus a post-singularity existence might be interpretable in principle – if not by humans, then by some successors of humans. However, it is important to bear in mind that I’m not using “human” to designate beings with some essential biological or cognitive nature here. According to the disconnection thesis, being human is a matter of belonging to one of two historical entities: the Wide Human – a socio-technical assemblage – or the Narrow biological species that keeps it going. Neither has been defined in terms of necessary or essential properties.
If this is right, any barrier to interpretation liable to hamper human attempts to evaluate or explore posthuman modes of existence will hold contingently. For a given set of posthuman minds – like the Vile Offspring – to be radically uninterpretable by humans, it would need to be a necessary truth about humans that Vile Offspring minds could not be understood by humans. But if belonging to the Wide Human is the only condition on humanity, no being could be debarred from Wide Humanity on the grounds that it could understand weird posthumans like the Vile Offspring. Thus any interpretative barrier would be a contingent matter rather than a consequence of some human cognitive essence.
This does not imply that an interpretative barrier will not occur, only that it is not inevitable. But what is not inevitable is, as Dennett quips, “evitable”. There is something someone (or something) can do about it.
Davidson, Donald (1984). “On the Very Idea of a Conceptual Scheme”, in Inquiries into Truth and Interpretation, Oxford: Clarendon Press.
Dennett, D. C. (1995). “Do Animals Have Beliefs?”, in Comparative Approaches to Cognitive Science, 111.
Metzinger, Thomas (2004). Being No One: The Self-Model Theory of Subjectivity. Cambridge, MA: MIT Press.
Roden, David (2013a). “The Disconnection Thesis”, in Amnon Eden, Johnny Søraker, Jim Moor, and Eric Steinhart (eds.), The Singularity Hypothesis: A Scientific and Technological Assessment, Berlin/Heidelberg: Springer-Verlag, 281–298.
Roden, David (2013b). “Nature’s Dark Domain: An Argument for a Naturalized Phenomenology”, in Human Experience and Nature, Royal Institute of Philosophy Supplement 72. Cambridge: Cambridge University Press.
Stross, Charles (2006). Accelerando. London: Orbit.
Vinge, Vernor (1993). “The Coming Technological Singularity: How to Survive in the Post-Human Era”, Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace. Accessed 8 December 2007. http://www.rohan.sdsu.edu/faculty/vinge/misc/singularity.html.
In contrast to the transparent multi-modal phenomenology of experience, human verbal thinking is relatively opaque, since we are able to recollect earlier stages of processing and so to represent the syntactic and semantic properties of linguistic symbols.