This piece from Una Sinnott is delicious. It demonstrates how work in the arts (here, experimental music) can feed into fundamental technologies, which can then hop between disparate applications (radio-controlled torpedoes, GPS). It’s a case study in how women’s contributions to technology get marginalised, and how patriarchy blows back.
“The Sobornost Station is large enough to have its own weather. The ghost-rain inside does not so much fall but shimmers in the air. It makes shapes and moves, and gives Tawaddud the constant feeling that something is lurking just at the edge of her vision.
She looks up, and immediately regrets it. Through the wet veil, it is like looking down from the top of the Gomelez shard. The vertical lines far above pull her gaze towards an amber-hued, faintly glowing dome almost a kilometer high, made of transparent, undulating surfaces that bunch together towards the centre, like the ceiling of a circus tent, segmented by the sharply curving ribs of the Station’s supporting frame.
Forms like misshapen balloons float beneath the vault. At first they look random, but as Tawaddud watches, they coalesce into shapes: the line of a cheekbone and a chin and an eyebrow. Then they are faces, sculpted from air and light, looking down on her with hollow eyes.”
(Rajaniemi 2012, 82)
Rajaniemi, Hannu (2012). The Fractal Prince. St Ives: Gollancz.
Stopped over in Athens Airport trying to digest three days at the Posthuman Politics conference at Mytilini, Lesbos, 25-28 September. It was an intense experience on so many levels and utterly worthwhile. My work has veered into some relentlessly abstract places recently, because someone has to … But having the privilege of attending Jaime del Val’s metahuman performance and Stefan Lorenz Sorgner’s star turn on metahumanist pedagogy was formative.
I’m not done with posthumanist metaphysics, or Scott’s semantic Götterdämmerung, but Stefan and Jaime are forging a value-pluralist posthuman politics with a real chance of productively mapping human-posthuman modes of embodiment and experience within an interdisciplinary framework. For what it’s worth, I think their open-textured practice may constitute our most tenable (if still precarious) path through the posthuman predicament. It has direct implications for public policy (e.g. Stefan’s argument for genetic engineering in education) – perhaps even for getting out of the neoliberal quagmire. None of this, of course, begins to convey the energy and intellectual openness of the event or the delightful hospitality of Evi Sampanikou and the humans and nonhumans of the University of the Aegean.
Continuing the “dark” posthumanism strand from recent blog posts and from my book Posthuman Life: Philosophy at the Edge of the Human (Routledge 2014), I argue that we cannot extend our moral thinking to certain portions of “posthuman possibility space” because our folk psychology and parochial norms of practical reasoning might not apply to “hyperplastic” posthumans. I conclude that there are no good grounds to reject the possibility that there are non-persons every bit as morally considerable as persons. Paper on academia.edu here.
According to the Disconnection Thesis (Roden 2012; 2014: Chapter 5) a posthuman is an agent descended from some part of the human socio-technical system that has “gone feral”. In its ancestral form, it may have served human ends, or have been narrowly human itself, but (post-disconnection) has accrued values and roles elsewhere.
To date there are no posthumans so we can only guess at their likely powers. But it seems safe to assume that anything capable of cutting out of the human system would need to be at least as flexible and adaptable as humans are themselves.
These powerful entities might be indifferent to humans, but they may not like us at all; or like us in ways we would not like to be liked. They may view us as a threat, or they may be immensely powerful sadists who devote some part of their technological prowess to killing and torturing us. If posthumans are conceivable, so are very bad posthumans.
So can we do some contingency planning to insure against the emergence of posthuman dark lords? To do this we would need some handle on the kinds of current technologies that might induce a dark lord disconnection (DLD). But what kinds of technologies could these be?
It might seem that some technological possibilities can be discerned a priori – by consulting reliable conceptual “intuitions” about the extendible powers of current technologies. For example, a being like Skynet – the genocidal military computer in James Cameron’s Terminator films – seems a plausible occupant of a posthuman timeline; whereas Sauron, the supernatural dark lord of Tolkien’s Lord of the Rings, does not. However, since the work of Saul Kripke in the 1970s many philosophers have come to accept that there are a posteriori natural possibilities and necessities that are only discoverable empirically. That light has the same velocity in every inertial reference frame upsets common-sense intuitions about relative motion and could not have been discovered by reflecting on pre-relativistic concepts of light.
Claims about hypothetical technological possibility may be as vulnerable to refutation as naive physics. States like the US and China employ computers to co-ordinate military activities, so a Skynet seems the more plausible posthuman antagonist. But the fact that there are computers but no supernatural dark lords does not entail that their capacities could be extended in any way we imagine. Light bulbs exist as well as computers, but maybe a Skynet is no more technologically possible than Byron, the intelligent light bulb in Thomas Pynchon’s fabulist novel Gravity’s Rainbow.
So here’s a thing. Posthuman Possibility Space (the set of technically possible routes to disconnection) may contain a Dark Lord Possibility Sub-Space – the trajectories all of which lead to a DLD! We may not have any reliable indication of what (if anything) belongs to it. But, quite possibly, it is out there, waiting.
Roden, David. 2012. “The Disconnection Thesis”. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, A. Eden, J. Søraker, J. Moor & E. Steinhart (eds), 281–98. London: Springer.
BRANDOM AND POSTHUMAN AGENCY: AN ANTI-NORMATIVIST RESPONSE TO BOUNDED POSTHUMANISM
David Roden, Open University
Introduction: Bounded Posthumanism
Posthumanism can be critical or speculative in orientation. Both kinds are critical of human-centered (anthropocentric) thinking. However, their rejection of anthropocentrism applies to different areas: Critical Posthumanism (CP) rejects the anthropocentrism of modern philosophy and intellectual life; Speculative Posthumanism (SP) opposes human-centric thinking about the long-run implications of modern technology.
Whereas critical posthumanists are interested in the posthuman as a cultural and political condition, speculative posthumanists are interested in the possibility of certain technologically created nonhuman agents. They claim that there could be posthumans – where posthumans would be “wide human descendants” of current humans that have become nonhuman in virtue of some process of technical alteration (Roden 2012; 2014, Chapter 5).
Despite differences in concern and methodology, however, CP and SP have convergent interests. CP requires that there are no transcendental conditions for agenthood derivable from parochial facts about human agency. If this is right, it must be as true of possible nonhuman agents as it is of actual nonhuman agents.
For this reason, I distinguish two claims regarding technological successors to current humans: an anthropologically bounded posthumanism (ABP); and an anthropologically unbounded posthumanism (AUP). ABP comprises three claims:
- There are unique constraints C on cognition and agency which any being qualifying as a posthuman successor to humans must satisfy.
- Agents satisfying C can know that they are agents and can deduce a priori that they satisfy C (they are transcendental constraints).
- Humans typically satisfy C.
ABP’s import becomes clearer if we consider the collection of histories whereby posthuman wide descendants of humans could feasibly emerge. I refer to this set as Posthuman Possibility Space (PPS – See Roden 2014: 53).
Given that posthumans would be agents of some kind (See Chapter 6) and given ABP, members of PPS would have to satisfy the same transcendental conditions (C) on agency as humans.
Daryl Wennemann assumes something along these lines in his book Posthuman Personhood. He adopts the Kantian idea that agency consists in the capacity to justify one’s actions according to reasons and shared norms. For Wennemann, a person is a being able to “reflect on himself and his world from the perspective of a being sharing in a certain community” (Punzo 1969, cited in Wennemann 2013: 47). This is a condition of posthuman agency as much as of human agency.
This implies that, whatever the future throws up, posthuman agents will be social and, arguably, linguistic beings, even if they are robots or computers, have strange bodies, or even stranger habits. If so, PPS cannot contain non-anthropomorphic entities whose agency is significantly nonhuman in nature.
ABP implies that there are a priori limits on posthuman weirdness.
AUP, by contrast, leaves the nature of posthuman agency to be settled empirically (or technologically). Posthumans might be social, discursive creatures; or they might be different from us in ways that we cannot envisage short of making some posthumans or becoming posthuman ourselves.
AUP thus extends the critical posthumanist rejection of anthropocentrism to the deep time of the technological future. In Posthuman Life I defended it via a critique of Donald Davidson’s work on intentionality; coupling this with a “naturalistic deconstruction” of transcendental phenomenology in its Husserlian and Heideggerian forms (See also Roden 2013).
Some of these arguments, I believe, carry over to the more overtly normativist philosophy of Robert Brandom – a philosopher whose work I did not address in detail there (for reasons of space and incompetence). The account of the relationship between normativity, social practice, and intentionality that Brandom provides in Making It Explicit, and in other writings, is one of the most impressively detailed, systematic and historically self-aware attempts to explain subjectivity, agency and intentionality in terms of social practices and statuses. It thus merits the appraisal of all philosophical posthumanists, whether they are of a critical or a speculative bent.
First and Second-Class Agents
I will begin with a thumbnail sketch of how Brandom derives a priori conditions of possibility for agency and meaning from a theory of social practices. Then I will consider whether its foundations are capable of supporting this transcendental superstructure.
Brandom is a philosophical pragmatist. Like other pragmatists, he is committed to the claim that our conceptual and intellectual powers are grounded in our practical abilities rather than in relations between mental entities and what they represent (Brandom 2006).
His pragmatism implies a species of interpretationism with regard to intentional content. Interpretationists, like Daniel Dennett, claim that intentional notions such as “belief” do not track inner vehicles of content but help us assess patterns of rational activity on the part of other “intentional systems” (Wanderer 2008). Belief-desire talk is not a folk psychological “theory” about internal states, but a social “craft” for evaluating and predicting other rational agents.
For Dennett, an entity qualifies as an agent with reasons if predicting its behaviour requires interpreters to attribute it the beliefs and desires it ought to have given its nature and environment. A being whose behaviour is voluminously predictable under this “intentional stance” is called an “intentional system” (IS). In IS theory, there is no gap between predictability under the intentional stance and having real intentionality.
Brandom endorses Dennett’s claim that intentional concepts are fundamentally about rendering agency intelligible in the light of reasons. However, he argues that IS theory furnishes an incomplete account of intentionality. Interpretation is an intentional act; thus interpretationists need to elucidate the relationship between attributed intentionality and attributing intentionality. If we do not understand what kind of being could count as a prospective interpreter, we cannot claim to have understood what it is to attribute intentionality in the first place (Brandom 1994: 59).
Brandom goes one further. The intentionality attributed to intrinsically meaningless events or linguistic inscriptions seems entirely derived from interpreters. Similarly with relatively simple ISs. Maze-running robots or fly-catching frogs can properly be understood from the intentional stance – making them true believers by Dennett’s lights. But their intentionality seems likewise observer-relative; derived from the attitudes of interpreting ISs (60). To hold otherwise, he argues, is to risk a disabling regress. For if intentionality is derivative all the way up, there can be no real intentional attributions and thus no derivative (non-observer-relative) intentionality (60, 276).
Brandom claims that his theory can be read as an account of the conditions an organism must satisfy to qualify as an interpreting intentional system; that is, to warrant attributions of non-derived intentionality rather than the as-if intentionality we can attribute to simpler organisms or complex devices.
Whatever else the capacity for original or “first class” intentionality includes, it must involve the ability to evaluate the cognizance and rationality of similar beings and thus to be answerable to reasons (61). Entities with first-class intentionality and thus the capacity to assess and answer to reasons in this way are referred to by Brandom as sapient. Entities with only derived intentionality may exhibit the sentient capacity to react in discriminating and optimizing ways to their environment, but the conceptual content of these responses is attributed and observer-relative.
The claim that intentionality or the capacity for objective thought implies the capacity to evaluate other thinkers obviously has a rich post-Kantian lineage. However, one of the clearest arguments for connecting intentionality and the capacity for other-evaluation is provided by Donald Davidson in his essay “Thought and Talk” (Davidson 1984: 155-170).
Davidson begins with the assumption that belief is an attitude of “holding” true some proposition: for example, that there is a cat behind that wall. If belief is holding true it entails a grasp of truth and the possibility of being mistaken; and thus a concept of belief itself. Thus we cannot believe anything without the capacity to attribute true or false beliefs about the same topic to our fellow creatures (Davidson 1984: 170; 2001b: 104).
This capacity presupposes linguistic abilities, according to Davidson, because attributing contents to fellow creatures requires a common idiom of expression. Absent this, the possession of a concept of belief and, thus, the very having of beliefs, is impossible.
Brandom agrees! We need language to have and attribute beliefs, and, by extension, practical attitudes corresponding to desires and intentions (231-2). However, his official account avoids talk of beliefs or intentions in order to steer clear of the picture of beliefs, etc. as inner vehicles of content (sentences in the head, say) rather than social statuses available to discursive creatures like ourselves.
For Brandom, the primary bearers of propositional content are public assertions. Thus he bases his elaborate theory of intentionality not on a theory of mental representations or sub-propositional concepts, but on a pragmatic account of the place of assertions within the social game of “giving and asking for reasons”.
Correlatively, Brandom’s semantics begins with an explanation of how assertions – and their syntactical proxies, sentences – acquire propositional content. Like Wilfrid Sellars’ brand of functional semantics, it is framed in terms of the normative role of utterances within social practices which determine how a speaker can move from one position in the language-game to another.
In the case of assertions, the language transition rules correspond to materially correct inferences such as that x is colored from x is red. Language entry rules include observation statements which allow us to make claims like “There is snow on the grass” on the basis of our reliable dispositions to differentially respond (RDRDs) to recurrent states of our environment. Finally, language exit rules correspond to practical commitments to forms of non-linguistic action.
Thus Brandom agrees with other post-Wittgensteinian pragmatists that linguistic practices are governed by public norms. However, he follows Davidson in rejecting the “I-we” conception of social structure. (39-40; Davidson 1986). If meanings are inferential roles (as Dummett and Sellars also claim), then the content attributable to expressions will dance in line with the doxastic commitments of individual speakers.
Suppose you observe a masked figure in a red costume clambering up a skyscraper. The language entry rules ambient within your community of English speakers may entitle you (by default) to claim that Spiderman is climbing the building. However, you are unaware that Spiderman is none other than Peter Parker. So you are not yet entitled to infer that Peter Parker is climbing the building – although the substitutional rules of English would commit you to that further inference if (say) some reliable authority informed you of this fact.
This simple example shows that the inferential roles – thus meanings – of expressions like “Spiderman” are not fixed communally but have to vary with the auxiliary assumptions, sensitivities and dispositions of individual speakers. Understanding or interpreting the utterances and beliefs of others is thus a matter of deontic scorekeeping – that is keeping track of the way social statuses alter as speakers update their inferential commitments (Brandom 1994: 142).
Thus semantic and intentional content are co-extensive with the normative-functional roles of states and actions. It follows that what a belief or claim “represents” or is “about” is fixed by the status it can be ascribed from the perspective of various deontic scorekeepers (including the believer or claimant).
The second consequence – which I flagged earlier – is that a serious agent or thinker must, as Davidson held, be a language user. The inferential relations attributed by scorekeepers to pragmatically defined occurrences can only be expressed by a structured language with components such as predicates, singular terms and pronouns. Inferential roles are only learnable and projectable on this basis (Brandom 1994: Chapter 6). Thus Brandom’s account provides a pragmatic-semantic story with which to transcendentally partition PPS.
If posthumans are to be intentional agents in thrall to concepts, they will be subjects of discourse assessing one another according to public inferential proprieties.
The Norm-Grounding Problem
However, we only have reason to adopt this a priori partitioning of PPS if normativism can contend with some difficult foundational issues. I will refer to the most pressing of these as “the norm-grounding problem”.
Brandom’s pragmatics implies that the rules which furnish deontic statuses are implicit in what we do, in our linguistic and non-linguistic performances, rather than in some explicit set of semantic rules. But what does it mean for a norm to be implicit in a practice? (Brandom 1994: 29-30; Hattiangadi 2003: 420; Rosen 1997).
Are norms a special kind of fact, to which our practices conform or fail to conform? If there were normative facts that transcended our actions, this could at least explain how our inferences can be held to account by them.
Brandom rejects factualism regarding norms. They are not, he claims, “part of the intrinsic nature of things, which is entirely indifferent to them” (48: Rosen 1997: 163-4).
This seems wise. If there were Platonic norms, it is far from clear how animals like us, or our evolutionary forebears, could come to be aware of them (see next section).
Brandom thus adopts a nonfactualist or “phenomenalist” position regarding norms. Non-normative reality is “clothed” in a web of normative statuses when speakers treat public actions as correct or incorrect, permitted or entitled (Brandom 1994: 48).
However, before considering Brandom’s nonfactualist account of norms in greater detail, it is instructive to consider a superficially appealing position that he rejects: regularism. Regularism is the claim that norms are regularities. To act according to a norm (or follow a rule) is simply to behave in conformity with a regularity (27).
Regularism is consonant with pragmatism because one can obey a regularity without having explicit knowledge of it – thereby avoiding the vicious regress that ensues if we require that semantical rules need to be explicitly grasped by speakers (Brandom 1994: 24-5). Regularism is also appealing to philosophical naturalists since it explains how norms depend (or supervene) on facts about the physical state and structure of individual speakers.
However, Brandom rejects this attempt to ground normative claims in factual claims. Here he follows Kripke’s reading of Wittgenstein’s discussion of rule-following: pointing out that any finite sequence of actions will conform to many or even an infinite number of regularities. Thus there is no such thing as the regularity that a finite performance conforms to. For any continuation of that performance “there is some regularity with respect to which it counts as ‘going on in the same way’” (MIE, 28). There are just too many ways of gerrymandering regularities for any given continuation of a performance and the simple regularity view provides no basis for selecting between them. So the simple regularity account fails to explain how a determinate norm can be implicit in practice.
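The gerrymandering point has a familiar formal illustration in Kripke’s own “quus” example (the function and the threshold 57 are Kripke’s, not Brandom’s). A minimal sketch in Python: two rules can agree on every performance actually produced while prescribing different continuations, so no finite record of performances selects between them.

```python
# Toy illustration of the gerrymandering objection, using Kripke's
# "quus" example: a finite history of performances is consistent with
# more than one rule.

THRESHOLD = 57  # Kripke's illustrative bound; quus diverges above it


def plus(x: int, y: int) -> int:
    """The familiar addition rule."""
    return x + y


def quus(x: int, y: int) -> int:
    """A gerrymandered rival: agrees with plus on small arguments."""
    return x + y if x < THRESHOLD and y < THRESHOLD else 5


# A finite record of observed performances (all arguments below 57).
observed = [(0, 1), (2, 2), (10, 45), (56, 56)]

# Both rules fit every performance actually produced...
assert all(plus(x, y) == quus(x, y) for x, y in observed)

# ...yet they dictate incompatible "correct" continuations.
assert plus(68, 57) == 125
assert quus(68, 57) == 5
```

The record of performances underdetermines which rule is being followed, which is why the simple regularity view cannot say what “going on in the same way” requires.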
The standard response to the failure of the simple regularity view is to shift attention from finite stretches of performance “to the sets of performances (for instance, applications of a concept) the individual is disposed to produce” (ibid: my emphasis).
The appeal of unpacking grasping a rule in terms of dispositions is that one can be disposed to do an infinite number of things which one does not actually do because of the absence of triggering input (Martin and Heil 1998: 284).
Thus it might seem that we can avoid the gerrymandering objection by saying that different agents A and B grasp the same rules where they are disposed to perform identically given the same triggering inputs.
However dispositionalism seems unable to account for misapplications of a rule.
A might be disposed to behave in the same ways under the same triggering conditions as B, yet A could be correctly following one rule (say, plus) while B is incorrectly following a different rule (normative behaviour is compatible with recalcitrance [Brandom 1994: 31]). Even though A and B coincide exactly in both their actual and their counterfactual performances, they can be following different rules (Martin and Heil 1998: 284-5). If we unpack dispositions counterfactually, then, we cannot account for mistakes in application or reasoning. This version of dispositionalism, at least, is unable to explain how norms repose in practices.
So dispositions (if counterfactually conceived) do not help us solve the norm-grounding problem.
Deontic Statuses and Deontic Attitudes
As advertised, Brandom’s favoured account of norms is nonfactualist. We “clothe” a nonnormative world in deontic statuses by taking certain actions or utterances to be correct or incorrect (Brandom 1994: 161).
So normative statuses arise only insofar as there are creatures who can treat one another as committed or entitled to do this or that. In Brandom’s terminology: deontic statuses are assigned when creatures adopt deontic attitudes towards one another.
But what are deontic attitudes?
If they are necessarily intentional – like propositional attitudes – Brandom is stuck in a regress. The philosophical attraction of normative functionalism is that it promises to reduce intention-talk to norm-talk. If deontic attitudes are necessarily intentional, however, he has made little progress in explaining interpreting intentionality via social practices.
Moreover, his account would fail to accord with a minimal Darwinian naturalism. Norm-instituting powers cannot have appeared fully formed but must have emerged gradually from the scum of sentience (Rosen 1997). Thus Brandom’s account must be consistent with the claim that merely sentient creatures – capable only of reliable dispositions to differentially respond (RDRDs) to their environments – could non-magically acquire a sapient responsiveness to reasons.
Brandom is sensitive to these requirements. He argues that deontic attitudes can occur in “prelinguistic communities” which lack full noetic and agential powers (161). The simplest model of deontic attribution that he provides is one in which performances are assessed as something the performer is authorized to do by the withholding of sanctions – where sanctioning behaviour, here, is a manifestation of differentially responsive dispositions and not florid interpretative powers.
For example, the deontic status of being entitled to pass through a door might be instituted by a ticketing system in which “the ticket-taker is the attributer of authority, the one who recognizes or acknowledges it and who by taking the ticket as authorizing, makes it authorizing, so instituting the entitlement” (161). This account can be complicated if we introduce deontic attitudes that institute responsibilities on the part of agents.
For example, taking the Queen’s shilling makes one liable to court martial if certain military duties are not undertaken (163). According to Brandom these cases illustrate how social actors can partition “the space of possible performances into those that have been authorized and those that have not, by being disposed to respond differently in the two cases” (161-2: emphasis added).
Does this model show that Brandom’s account can satisfy the minimal naturalist constraints that he recognizes? A number of commentators – including Daniel Dennett and Anandi Hattiangadi – have pointed out that it succumbs to the gerrymandering objections that Brandom cites against regularism (Dennett 2010; Hattiangadi 2003). Any performative regularities (actual or counterfactual) exhibited by actors and sanctioners in this simple model will be consistent with multiple normative readings of their behaviours – including interpretations which render the “deontic attitudes” mistaken. If the gerrymandering argument refutes regularist theories of rule-following, it refutes dispositionalist accounts of deontic attitudes.
As Hattiangadi points out, beefing up the noetic powers of instituters will avail little. If we furnish sanctioners with the power to make contentful judgements (about whether an agent is entitled to pass through the door, for example) we are already in the realm of the intentional (Hattiangadi 2003: 428).
It follows that a naturalistically constrained normativism does not appear able to explain how social beings can institute norms, thus normative statuses, thus determinate inferential semantic contents, without a vitiating appeal to florid intentional powers.
The Interpretationist Defense
Can Brandom’s account be repaired in a way that meets his minimal naturalist commitment?
Well, one defense that seems consistent with Brandom’s avowals elsewhere is to follow Davidson and Dennett by claiming that certain kinds of social behaviour are norm-governed if a) members of our speech community would properly interpret them as normative or b) an ideally rational interpreter privy to all the relevant behavioral facts would read them as normative. This response has something to recommend it. When interpreting alien social practices we are liable to appeal to our own background assumptions about what performances belong to the sortal “social practice”. Moreover, appealing to the notion of an ideal interpreter can be of value when trying to understand the theoretical and empirical constraints on attributions of semantic or normative content.
However, as Hattiangadi remarks, this response misses the point of the dispositional analysis of deontic attitudes. This was to explain how a non-sapient community could bootstrap itself into sapience by setting up a basic deontic scorekeeping system. Appealing to actual or ideal interpreters simply replicates the problem with Dennett’s intentional stance approach since it tells us nothing about the conditions under which a being qualifies as a potential interpreter and thus little about the conditions for meaning, understanding or agency.
A similar problem afflicts Joseph Heath’s (2001) proposal that Brandomian norms emerge from reciprocal expectations supported by sanctions. The idea is that a first person acts in a certain way while expecting a sanctioning response from a second person. The second person, meanwhile, is disposed to respond to certain performances with sanctioning behaviour while the first person recognizes this. Where this minimal intersubjective couple converges towards a single pattern of behaviour over time, Heath argues, we are entitled to treat their activity as implying a norm.
Heath’s proposal may be fine if we assume that certain intentional powers are already in place – e.g. that each individual both expects and sanctions the activity of the other. However, as Hattiangadi’s appeal to the gerrymandering argument shows, this structure presupposes beings capable of intentional states such as expecting and sanctioning. This is presumably what distinguishes it from simpler cases of dynamical coupling where two physical systems converge towards a single pattern of behaviour. But if the normativist is serious about explaining the intentional in normative terms, they are not entitled to these assumptions.
If Brandom is right about the defects of Dennett-style or Davidson-style interpretationism, the tendency for his own account to regress to those positions is telling. It suggests that interpretationist accounts cannot explain the semantic or the intentional without regressing to assumptions about ideal interpreters or background practices whose scope they are incapable of delimiting.
The point is not that interpretationism is false but that it is ultimately unilluminating. It is empirically unproblematic that we interpret other speakers, texts, cultural artifacts, etc. However, if in-principle interpretation according to the intentional stance fixes the content of intentional discourse, but the nature of such interpretation is ill-defined, then we have merely satisfied our curiosity about the nature of mindedness by appealing to our local mind-reading techniques. We do not yet know what the invariants (if any) of intentional interpretation are. Another way of putting this is that our practices of interpretation and deontic assessment are phenomenologically “dark”. The fact that we have them and have a little empirical knowledge of them leaves us ignorant both of their underlying nature and (by extension) of the space of interpretative and psychological possibility. Normativist ABP and its interpretationist variants thus provide no future-proof constraints on the space of possible minds or possible agents (see also Bakker 2014).
If so, then they provide no warrant for the claim that any serious agent must be a “subject of discourse” able to measure its own performances against public standards. Presumably, humans are agents of this kind, but the phenomenological darkness surrounding normativity implies that we should not presume that we understand what normativity must involve.
It follows that Anthropologically Unbounded Posthumanism is not seriously challenged by the argument that mind and meaning are constituted by social practices. AUP implies that we can infer no claims about the denizens of Posthuman Possibility Space a priori, by reflecting on the pragmatic transcendental conditions for semantic content. We thus have no reason to suppose that posthuman agents would have to be subjects of discourse or members of communities.
Nor (given our lack of any transcendental grasp of agency) are we entitled to settle the ethical status of very strange posthumans by reflection alone. We have no future-proof grasp of how strange posthumans might be, so we lack any basis for adjudicating the moral status of such beings in advance. We may buy into a parochial humanism which accords human subjects a level of moral consideration greater than that of the nonhuman creatures we know about. But this does not entail that there are no morally considerable states of being in PPS of which we are currently unaware and which have little in common with the modes of being accessible to current humans. If posthuman politics is anthropologically unbounded in this way, then any ethical assessment of the posthuman must follow on its historical emergence. If we want to do serious posthuman ethics, we need to make posthumans or become posthuman.
Bakker, Scott. 2014. “The Blind Mechanic II: Reza Negarestani and the Labor of Ghosts.” Three Pound Brain. Retrieved April 30, 2014, from https://rsbakker.wordpress.com/2014/04/13/the-blind-mechanic-ii-reza-negarestani-and-the-labour-of-ghosts
Brandom, R. 1994. Making It Explicit: Reasoning, Representing, and Discursive Commitment. Cambridge, MA: Harvard University Press.
Brandom, R. 2001. Articulating Reasons: An Introduction to Inferentialism. Cambridge, MA: Harvard University Press.
Brandom, R. 2002. Tales of the Mighty Dead: Historical Essays in the Metaphysics of Intentionality. Cambridge: Cambridge University Press.
Brandom, R. 2006. “Kantian Lessons about Mind, Meaning, and Rationality.” Southern Journal of Philosophy 44: 49–71.
Brandom, R. 2007. “Inferentialism and Some of Its Challenges.” Philosophy and Phenomenological Research 74 (3): 651–676.
Brassier, R. 2011. “The View from Nowhere.” Identities: Journal for Politics, Gender and Culture 17: 7–23.
Davidson, D. 1984. Inquiries into Truth and Interpretation. Oxford: Clarendon Press.
Davidson, D. 1986. “A Nice Derangement of Epitaphs.” In Truth and Interpretation, E. LePore (ed.), 433–46. Oxford: Blackwell.
Dennett, D. C. 1989. The Intentional Stance. Cambridge, MA: MIT Press.
Dennett, D. C. 2010. “The Evolution of ‘Why?’: An Essay on Robert Brandom’s Making It Explicit.”
Hattiangadi, A. 2003. “Making It Implicit: Brandom on Rule Following.” Philosophy and Phenomenological Research 66 (2): 419–431.
Heath, J. 2001. “Brandom et les sources de la normativité.” Philosophiques 28 (1): 27–46.
Heil, John & Martin, C. B. 1998. “Rules and Powers.” Philosophical Perspectives 12: 283–312.
Hohwy, J. 2006. “Internalized Meaning Factualism.” Philosophia 34 (3): 325–336.
Kraut, Robert. 2010. “Universals, Metaphysical Explanations, and Pragmatism.” Journal of Philosophy 107 (11): 590–609.
Lewis, Kevin. 2013. “Carnap, Quine and Sellars on Abstract Entities.” https://www.academia.edu/2364977/Carnap_Quine_and_Sellars_on_Abstract_Entities (accessed 12-7-14).
Roden, David. 2012. “The Disconnection Thesis.” In The Singularity Hypothesis: A Scientific and Philosophical Assessment, A. Eden, J. Søraker, J. Moor & E. Steinhart (eds), 281–98. London: Springer.
Roden, David. 2013. “Nature’s Dark Domain: An Argument for a Naturalised Phenomenology.” Royal Institute of Philosophy Supplements 72: 169–88.
Roden, David. 2014. Posthuman Life: Philosophy at the Edge of the Human. London: Routledge.
Rosen, Gideon. 1997. “Who Makes the Rules Around Here?” Philosophy and Phenomenological Research 57 (1): 163–171.
Wanderer, Jeremy. 2008. Robert Brandom. Acumen/McGill-Queen’s University Press.
Wennemann, D. J. 2013. Posthuman Personhood. New York: University Press of America.
This formulation allows that posthumans could be descended from technological assemblages which are existentially dependent on servicing “narrow” human goals. Becoming nonhuman in this sense is not a matter of losing a human essence but of ceasing to belong to a human-oriented socio-technical system: the Wide Human (Roden 2012; 2014). I refer to the claim that becoming posthuman consists in becoming independent of the Wide Human as “the Disconnection Thesis”.
Brandom also follows Kant in trying to understand semantic notions like reference and truth in terms of their roles in articulating judgement rather than as semantic or representational primitives.
Intentional systems are unlikely to contain sawdust or stuffing, but IS theory is agnostic regarding their internal machinery or phenomenology. Thus IS theory undercuts both eliminativist and reductionist accounts of intentionality while providing a workable methodology for investigating the mechanisms that actuate intentional systems.
“The key to the account is that an interpretation of this sort must interpret community members as taking or treating each other in practice as adopting intentionally contentful commitments and other normative statuses” (Brandom 1994: 61)
 I can express the belief that there is a cat behind that wall with a sentence in some natural language but I am also able to use the same sentence to attribute this belief to others.
 His subsequent, very detailed, analysis of subsentential expressions is necessarily decompositional rather than compositional – analyzing down rather than building up from simpler semantic components.
The point of attributions of belief or desire, for example, is to determine what an agent is committed or entitled “to say or do”. Likewise, the point of affixing truth values to beliefs or statements is to assess or endorse their propriety within the game of giving and asking for reasons. Is the claimant entitled to assert that p? Are the inferential consequences of p that they acknowledge the actual consequences? (17, 542).
So for a rule with infinite application, it is not necessary for the rule user to have all the triggering instances “before his mind” in order to have grasped how to perform in any of these instances.
Martin and Heil (1998) and Hohwy (2006) present a good case for holding that dispositions can avoid Kripkensteinian skeptical conclusions if construed realistically rather than in terms of statements about counterfactual behaviour.
 “Looking at the practices a little more closely involves cashing out the talk of deontic statuses by translating it into talk of deontic attitudes. Practitioners take or treat themselves and others as having various commitments and entitlements. They keep score on deontic statuses by attributing those statuses to others and undertaking them themselves. The significance of a performance is the difference it makes in the deontic score-that is, the way in which it changes what commitments and entitlements the practitioners, including the performer, attribute to each other and acquire, acknowledge, or undertake themselves.” (Brandom 1994: 166).
Kevin has provided a typically engaging gloss on the difference between posthumanism and transhumanism over at the IEET site. I don’t fundamentally disagree with his account of transhumanism (though I think he needs to emphasize its fundamentally normative character) but the account of posthumanism he gives here has some shortcomings:
Two significant differences between transhumanism and the posthuman is the posthuman’s focus on information and systems theories (cybernetics), and the posthuman’s consequent, primary relationship to digital technology; and also the posthuman’s emphasis on systems (such as humans) as distributed entities—that is, as systems comprised of, and entangled with, other systems. Transhumanism does not emphasize either of these things.
Posthumanism derives from the posthuman because the latter represents the death of the humanist subject: the qualities that make up that subject depend on a privileged position as a special, stand-alone entity that possesses unique characteristics that make it exceptional in the universe—characteristics such as unique and superior intellect to all other creatures, or a natural right to freedoms that do not accrue similarly to other animals. If the focus is on information as the essence of all intelligent systems, and materials and bodies are merely substrates that carry the all-important information of life, then there is no meaningful difference between humans and intelligent machines—or any other kind of intelligent system, such as animals.
Now, I realize we can spin definitions to different ends; but even allowing for our different research aims, this won’t do. Posthumanists may, but need not, claim that humans are becoming more intertwined with technology. They may, but need not, claim that functions, relations or systems are more ontologically basic than intrinsic properties. Many arch-humanists are functionalists, holists or relationists (I Kant, R Brandom, D Davidson, G Hegel . . .) and one can agree that human subjectivity is constitutively technological (A Clark) without denying its distinctive moral or epistemological status. Reducing stuff to relations can be a way of emphasizing the transcendentally constitutive status of the human subject, taking anthropocentrism to the max (see below). Emphasizing the externality or contingency of relations can be a way of arguing that things are fundamentally independent of that constitutive activity (as in Harman’s OOO or DeLanda’s assemblage ontology).
So I raise Kevin’s thumbnails with a few of my own.
- A philosopher is a humanist if she believes that humans are importantly distinct from non-humans and supports this distinctiveness claim with a philosophical anthropology: an account of the central features of human existence and their relations to similarly general aspects of nonhuman existence.
- A humanist philosophy is anthropocentric if it accords humans a superlative status that all or most nonhumans lack.
- Transhumanists claim that technological enhancement of human capacities is a desirable aim (all other things being equal). So the normative content of transhumanism is largely humanist. Transhumanists just hope to add some new ways of cultivating human values to the old unreliables of education and politics.
- Posthumanists reject anthropocentrism. So philosophical realists, deconstructionists, new materialists, Cthulhu cultists and naturalists are posthumanists even if they are unlikely to crop up on one another’s Christmas lists.