This issue looks like essential reading for anyone interested in the foundations of computationalism in Cog Sci and Phil Mind.

Donald Davidson

On April 25, 2011, in Uncategorized, by enemyin1


[Here’s an unpublished dictionary entry on Donald Davidson (from 2004). A modified version is to be found in the 2005 Edinburgh Dictionary of Continental Philosophy (John Protevi ed.)]

Donald Davidson (b. 1917, d. 2003), American philosopher whose essays on language, mind and knowledge extend Quine’s attack on the reification of meaning and on epistemological foundationalism. In ‘Mental Events’ (1970) he also propounded an influential form of non-reductive materialism. Drawing here on his theory of linguistic interpretation, he argues that beliefs, desires and actions are ascribable only on the assumption that they are interrelated in a largely reasonable and consistent manner. Since psychology, unlike physics, is governed by norms of rationality, strict psychophysical laws relating physical states to contentful mental states are impossible. This is compatible with each mental state being a ‘token’ of a physical type: hence Davidson’s characterisation of his position as ‘anomalous monism’. ‘Actions, Reasons and Causes’ (1963) argues on similar grounds that, while rendering action intelligible from an agent’s point of view, explanatory reasons must also be causally responsible for behaviour.

Davidson’s philosophy of language addresses philosophical constraints on theories of linguistic understanding. ‘Truth and Meaning’ (1967) argues that knowledge of the truth conditions of assertions suffices for understanding them and that Tarski’s account of truth in formalised languages shows how a semantic theory could eschew generalised conceptions of meaning or linguistic representation. Subsequent essays develop a metatheory of radical interpretation, specifying how interpreters could test whether a truth theory is interpretative for an uninterpreted language. The criterion of empirical success here is that a theory correctly predicts circumstances of utterance for arbitrary sentences of a language. This implies semantic holism: that the meaning of a term reflects its place in the totality of linguistic behaviour. Such behaviour counts as evidence only if, applying the ‘principle of charity’, speakers are assumed to have largely true beliefs. ‘The Structure and Content of Truth’ (1990) argues that interpretation presupposes a grasp of truth irreducible to theoretical notions like correspondence or coherence. ‘A Nice Derangement of Epitaphs’ (1986) employs literary cases of unconventional speech such as Sheridan’s Mrs Malaprop to argue that the notion of a common language has little explanatory role in semantics – a line of reasoning comparable to Derrida’s use of the notion of iterability to deconstruct philosophical appeals to convention or shared practice.

Like Quine, Davidson holds that interpretations are underdetermined by considerations of charity. There are, in consequence, no ‘deeper’ facts that could allow an interpreter to decide between competing interpretative theories explaining the same speech behaviour. Charity and the resultant ‘semantic indeterminacy’ have broader epistemological import. In ‘On the Very Idea of a Conceptual Scheme’ (1974) they are deployed against empiricist and transcendentalist pictures of mind ‘forming’ the world from unconceptualised ‘content’, which he takes to underlie relativism and strong incommensurability claims. Strong Kantian parallels can be found, however, in ‘Thought and Talk’ (1975) and ‘Rational Animals’ (1982), where it is argued that only creatures possessing a concept of belief can have beliefs and that an understanding of objectivity emerges in the intersubjective context of linguistic interpretation. ‘The Myth of the Subjective’ (1987) argues that mental content is only fixed under charitable interpretations of an agent’s activity within a common world, thereby undermining Cartesian-style appeals to intrinsically contentful ‘Ideas’ as a basis for philosophical reflection.


Interview with Max More

On November 18, 2010, in Uncategorized, by enemyin1

James Hughes interviews transhumanist philosopher Max More in this podcast from Changesurfer Radio. More provides a critique of technological determinist accounts of the singularity and advocates the adoption of a proactionary principle in opposition to the conservative terror reflex dignified under the rubric of ‘the precautionary principle’.


The Posthuman Aporia for Transhumanism I

On April 29, 2010, in Uncategorized, by enemyin1

Contemporary transhumanists argue that humans can be technologically re-engineered to free them from limitations which have hampered their life chances throughout history: ageing, disease, restricted cognitive capacities, underdeveloped social virtues, scarcity-based economic rationing.

This ethic is premised on prospective developments in the so-called ‘NBIC’ suite of technologies – Nanotechnology, Biotechnology, Information Technology, Cognitive Science – supplying the means to make the requisite modifications. In the area of human cognitive enhancement, drugs like amphetamine and modafinil are already used to increase the efficiency of learning and working memory (Bostrom and Sandberg 2006). More speculatively, micro-electric neuroprostheses might eventually be used to interface the brain directly with non-biological cognitive or robotic systems (Kurzweil 2005, 317). Such developments might bring forward the day when all humans are more intellectually (and physically) capable, whether through enhancements to their native biological machinery or through interfacing with supplemental cognitive technologies such as immersive virtual realities or artificial intelligences.

Some believe that a convergence of NBIC technologies will not only increase intelligence within the current human range but, beyond a critical point, contribute to a discontinuously rapid change in the nature of mentation on this planet. Vernor Vinge refers to this point as ‘the technological singularity’ (Vinge 1993). Vinge, along with Ray Kurzweil and Hans Moravec, argues that were a single super-intelligent machine created, it could create still more intelligent machines, resulting in a recursive growth in cognitive capacity to levels that (lacking this capacity) we cannot imagine.

Since such a situation is unprecedented, the best we can do to understand the post-singularity dispensation, Vinge claims, is to draw parallels with the emergence of an earlier transformative intelligence: ‘And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind.’ (Vinge 1993) If this analogy between the emergence of the human and the emergence of the post-human holds, we could no more expect to understand a post-singularity entity than a rat or non-human primate – lacking the capacity for refined propositional attitudes – could be expected to understand human conceptions like justice, number theory or public transportation.

Vinge’s position nicely exemplifies a generic posthumanist philosophy which I will refer to as ‘speculative posthumanism’.

Speculative posthumanists claim that descendants of current humans could cease to be human by virtue of a history of technical alteration. The notion of descent is ‘wide’ insofar as the entities that might qualify could include our biological descendants or beings resulting from purely technological activities (e.g. artificial intelligences, synthetic life-forms or uploaded minds).

In his presentation at Humanity+ UK 2010, philosopher and futurist Max More used the metaphor of an ‘event horizon’ to convey this vision of our wide descendants receding beyond the point at which the sense or meaning of their lives could be accessible to an observer situated outside the singularity. He expressed some scepticism about this prospect, suggesting that ‘our’ capacity to understand recursively generated technological change would increase so as to bridge the incommensurability. However, this assumes that the singularity would be evenly distributed (not everybody may elect to be augmented, or be capable of it), that the development of cognition would be smooth rather than discontinuous (involving transliteration into radically different formats), and that it would be unitary rather than multiple (why not many singularities?). Moreover, there remains the philosophical problem of how we mildly augmented primates are to envisage the prospect of being radically transcended by entities that, for want of a better word, we must describe as ‘posthuman’.

Speculative posthumanism claims that an augmentation history of this kind is metaphysically and technically possible. It does not imply that the posthuman would improve upon the human state or that there would be a scale of values by which human and posthuman lives could be compared – for example, in terms of their levels of happiness, autonomy or virtue. If radically posthuman lives were very non-human indeed, we cannot assume that they would be prospectively evaluable.

Speculative posthumanism is thus logically independent of transhumanism and, besides, generates an interesting ethical ‘aporia’ (an ancient Greek term denoting an impasse or irresolvable puzzle). After all, the kinds of policies by which transhumanists hope to reengineer our cognitive architecture (widespread use of cognitive enhancements, the development of Artificial General Intelligences (AGIs), brain-computer interfaces (BCIs) and synthetic biology) may also be precursors to singularities. Since transhumanists want to improve the human lot, they are bound to assess the risks as well as the ethical benefits of pursuing any specific technology. On the other hand, it is ex hypothesi impossible to evaluate prospectively the conditions of life beyond the event horizon of a technological singularity. Hence the aporia: the conceivability of the event horizon implies that the transhumanist is bound to evaluate that which in principle transcends anthropocentric philosophical values such as utility, welfare, autonomy, virtue or flourishing.

Bostrom, N. and Sandberg, A. (2006), ‘Converging Cognitive Enhancements’, Annals of the New York Academy of Sciences 1093: 201–227.

Kurzweil, Ray (2005), The Singularity is Near (New York: Viking).

Vinge, Vernor (1993), ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’. Accessed 24 April 2008.
