
Anti-reductionist physicalists or materialists deny that psychology can be theoretically reduced to physics but allow physics sovereignty concerning what exists. Anti-reductionist arguments vary, but a common line of attack against reductionism is that psychology expresses rational or normative relationships between mental states, not the causal or functional relationships expressed in theories of natural science. Thus, in Sellars’ “Two Images” account, physics and natural science tell us what exists, but humans still encounter themselves in a normatively structured “space of reasons”. Donald Davidson refers to his own version of this position as “anomalous monism” (AM):

“Anomalous monism resembles materialism in its claim that all events are physical, but rejects the thesis, usually considered essential to materialism, that mental phenomena can be given purely physical explanations. Anomalous monism shows an ontological bias only in that it allows the possibility that not all events are mental, while insisting that all events are physical” (Davidson 2001: 214)

Davidson’s account seeks to reconcile three claims that appear to be in tension: 1) that mental events causally interact with physical events; 2) that causal relations occur only where the events in question are covered by strict deterministic laws; 3) “that there are no strict deterministic laws on the basis of which mental events can be predicted and explained (the Anomalism of the Mental).”

Davidson aims to do this by arguing that the existence of a causal relationship between events implies only that there is some true description of those events under which they instantiate a strict law. The reconciliation is possible because causal relations obtain between token singular events, while laws are linguistically expressed generalisations. Mental events can be causally related to one another or to non-mental events.

But, according to Davidson, causality is nomological only in that, where two events are causally related, they have linguistic descriptions under which that relation is expressed by a law. It does not follow “that every true singular statement of causality instantiates a law” (215). Thus a statement like “Helen’s belief that Justin was murdered was caused by her seeing blood in the kitchen” adverts to a law-like relationship between a token of blood in the kitchen and a token belief about murder but does not state it. The law-like relationship, for Davidson, would have to be expressed in terms of the states and dynamics of a physical system which allowed a deterministic inference to a future state – her belief token – again rendered in some physicalistic idiom.

Claim 3) follows, Davidson thinks, if mental states are those addressed in propositional attitude ascriptions and such ascriptions depend holistically on overall assessments of the rationality and cognizance of agents in their world. In the space of reasons, where propositional attitudes are ascribed to persons, it is always possible to revise attributions in the interests of overall cogency. There can be no single translation scheme that pre-empts all the evidence that could be relevant to such ascriptions (222-223). Thus, whereas the theories in which physical regularities are stated must be closed to allow the formulation of exceptionless laws (homonomic), the language of propositional attitude ascription is necessarily open to multiple idioms or “heteronomic” (219):

“The heteronomic character of general statements linking the mental and the physical traces back to this central role of translation in the description of all propositional attitudes, and to the indeterminacy of translation. There are no strict psychophysical laws because of the disparate commitments of the mental and physical schemes. It is a feature of physical reality that physical change can be explained by laws that connect it with other changes and conditions physically described. It is a feature of the mental that the attribution of mental phenomena must be responsible to the background of reasons, beliefs, and intentions of the individual.” (222)

In Nagelian terms, it would be impossible to formulate true bridge laws between a reducing theory in some physical idiom and a reduced psychological theory, because the intentional side of the biconditional could always be revised in the light of holistic considerations irrelevant to the “physical side”. Thus type-type psychophysical reduction appears impossible. Note that an analogous result is obtainable if we view the space of reasons as structured by implicit norms irreducible to behavioral regularities.

Of course, not all accounts of reduction require bridge laws between reduced and reducing theories, or treat theories as interpreted sets of sentences. It is still open to the reductionist to argue for a different form of reduction (Bickle 1992: 222-4). It is also open to the reductionist to argue that psychology is not peculiar in being inexpressible “as sets of generalizations” – this being true of all scientific theories (226) – or in being open to extra-theoretical idioms in which to describe its contexts of application to real systems. Maybe no theory (physical or otherwise) is truly homonomic.

However, in the argument that follows I will suppose that Davidson’s anomalism is right, or, at least, that his account can be rectified in a form that is proof against neoreductionist assaults.

So let us assume that the psychological perspective in which agents have beliefs and desires and utter meaningful statements is conceptually irreducible (as Sellarsians say) to the scientific image of the world as a causal-physical system.

If so, then the possibility of a certain form of technological descendant of current humans (posthumans) implies that intentional psychology will be instrumentally if not theoretically eliminated.

That is, whatever its current value for humans, it could not play a similar role for the relevant class of posthuman. And this is not because of any logical or ontological vices, but because it would be incapable of functioning as an idiom for interpretation and understanding among these hypothetical successors. So the anti-reductionist argument against theoretical reduction/elimination supports a metaphysical case for instrumental elimination.

The hypothetical entities in question are what I refer to in Posthuman Life and elsewhere as “hyperplastic agents”. An agent is hyperplastic if it can make arbitrarily fine changes to any part of its functional or physical structure without compromising either its agency or its capacity for hyperplasticity. For example, suppose a hyperplastic agent dislikes some unpleasant memories associated with the taste of milk. Whereas a merely plastic agent like ourselves might need hours of cognitive behavioral therapy to excise these, the hyperplastic simply needs to locate the neuronal ensembles and pathways associated with these memories and ensure that they are no longer linked in such a way that the memory of milk causes them to activate in turn.

Likewise, a hyperplastic would be in a position to alter any other informational or value-relevant state by physically altering the relevant brain states. Obviously, I use the term “brain” broadly here to refer to those systems within the hyperplastic that are associated with “cognition”, “perception” or the “control of behaviour” in some intuitive sense of these terms. We need not assume that the “brain” in question is a known biological system. The original inspiration for the idea of the hyperplastic came from Steve Omohundro’s speculations about the goal structures of generally intelligent robots in his essay “The Basic AI Drives” (2008).

Davidson’s anti-reductionism implies token physicalism (each event that can be brought under a psychological description is identical to some physical event, since ontological physicalism is taken as a given).

So for any state in an agent with a psychological description there will be a physical description of that state. For any such state there will be interventions that the agent can make into the state which will produce a physically distinct successor state such that the former psychological description will no longer be true of it.

Now we can suppose that any hyperplastic agent will have an Agenda at a particular time. That is, it will not tinker with its internal states arbitrarily but will wish to do so in ways that do not kill it, do not undermine its capacity for hyperplasticity, and fulfill whatever desiderata are listed on the Agenda.

The interesting question (assuming Davidsonian anti-reductionism) is how the Agenda can be formulated. Can it be expressed in psychological terms (roughly, in terms of propositional attitudes or values)? If it is expressed in psychological terms, then anti-reductionism implies that, for any intervention the agent makes at the physical level, it will not be possible to reliably infer the psychological outcome of the alteration.

This follows simply because there are no psychophysical laws. Moreover, even rough generalisations over past interventions would not be much help. These might be reliable for merely plastic creatures whose basic design and structure remain fairly constant over time. But a hyperplastic agent is protean. Thus it cannot assume that the rough-and-ready psychophysical generalisations that have held over one phase of its existence will extend into another phase.

It follows that, however a hyperplastic agent frames the Agenda, the Agenda cannot be psychologically expressible, because no reliable inferences can be drawn from future physical form to future psychology.

So if hyperplastics have Agendas, these would have to represent states that could be reliably inferred from facts about their physical constitution at a given time. But given Davidson’s anti-reductionism, they would have little use for psychological self-description for making generalisations about their current or future actions. Suppose a hyperplastic agent self-attributes a belief b. A merely plastic agent like you or me might assume generalisations along the lines of “I will continue to hold b unless I find evidence from which some contrary of b can be inferred”. But a hyperplastic agent would not be able to assume such generalisations, because it could not rule out that an auto-intervention would cause it to lose b regardless of the evidence in its favour.

So a hyperplastic agent could not use propositional attitude psychology to predict its own behaviour. Folk psychology would be equally impotent for predicting the behaviour of its fellow hyperplastics for the same reason.

If hyperplastic agents could exist and plan their self-interventions, they would have to employ an entirely different idiom to understand themselves or one another. A posthuman-making disconnection that resulted in the emergence of hyperplastics would inevitably result in the instrumental elimination of folk psychological capacities, at least among the population of hyperplastics, since neither the capacity nor the linguistic idiom for attributing propositional attitudes would have predictive or hermeneutic utility.

This means that were humans to encounter hyperplastics, they would not be radically interpretable (in Davidson’s sense) because radical interpretation depends on the principle of charity and this, again, is framed in folk psychological terms.

I conclude that if hyperplastic agents are possible, we could not understand them without abandoning the conceptual framework we currently use to understand ourselves and our conspecifics. They would be radically uninterpretable.

 

References

Bickle, John. 1992. “Mental Anomaly and the New Mind-Brain Reductionism”. Philosophy of Science 59 (2): 217–30.

Davidson, Donald. 1984. Inquiries into Truth and Interpretation. Oxford: Clarendon Press.

Davidson, Donald. 2001. Essays on Actions and Events, Vol. 1. Oxford: Oxford University Press.

Omohundro, S. M. 2008. “The Basic AI Drives”. Frontiers in Artificial Intelligence and Applications 171: 483.


Conversations On TechNoBody

On March 24, 2015, in Uncategorized, by enemyin1

A series of interviews discussing the recent TechnoBody exhibition.

Part of Anti-Utopias’ digital art series.


Pete Furniss improvising with C-C-Combine

On March 24, 2015, in Uncategorized, by enemyin1

Ajkad Csupa Vér – Pete Furniss, clarinet & live electronics from furnerino on Vimeo.

Live improvisation by clarinettist Pete Furniss using C-C-Combine – a concatenative synthesis patch built by Rodrigo Constanzo in Max MSP. On his website, Rodrigo explains that concatenative synthesis is a form of granular synthesis in which modulation via sound sources, rather than prescribed parameters (grain density, jitter, waveform, etc.), determines how the sound grains (short samples) are played back.
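To make that idea a little more concrete, here is a minimal sketch of source-driven grain selection in Python/NumPy. It is an illustrative assumption about how a concatenative process can work in general, not a description of C-C-Combine itself: an input sound is analysed frame by frame, and for each frame the closest-matching grain from a corpus is chosen and overlap-added into the output, so grain playback is modulated by a sound source rather than by preset parameter curves.

```python
import numpy as np

def frame_features(frames, sr):
    """Loudness (RMS) and a crude, normalised spectral centroid for each grain."""
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
    centroid = (spectra * freqs).sum(axis=1) / (spectra.sum(axis=1) + 1e-12)
    return np.stack([rms, centroid / (sr / 2)], axis=1)

def slice_grains(signal, size, hop):
    """Cut a 1-D signal into overlapping grains of `size` samples."""
    n = 1 + max(0, (len(signal) - size) // hop)
    return np.stack([signal[i * hop : i * hop + size] for i in range(n)])

def concatenative_resynthesis(target, corpus, sr=44100, size=1024, hop=512):
    """Rebuild `target` out of corpus grains, matched by feature distance."""
    window = np.hanning(size)
    t_grains = slice_grains(target, size, hop) * window
    c_grains = slice_grains(corpus, size, hop) * window
    t_feat, c_feat = frame_features(t_grains, sr), frame_features(c_grains, sr)
    out = np.zeros(len(target) + size)
    for i, f in enumerate(t_feat):
        # The sound source (the target's features), not a fixed parameter curve,
        # decides which corpus grain is played back next.
        best = np.argmin(np.sum((c_feat - f) ** 2, axis=1))
        out[i * hop : i * hop + size] += c_grains[best]
    return out[: len(target)]

if __name__ == "__main__":
    sr = 44100
    t = np.linspace(0, 2, 2 * sr, endpoint=False)
    target = np.sin(2 * np.pi * 220 * t)          # stand-in for a live input
    corpus = np.random.uniform(-1, 1, 2 * sr)     # stand-in for a sample library
    print(concatenative_resynthesis(target, corpus, sr).shape)
```

In a live setting the target would presumably be the incoming clarinet signal and the corpus a pre-analysed sample bank, but the matching logic is essentially the same.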

Pete will be a keynote performer at the Philosophy of human+computer music 2 Workshop at Sheffield University on May 27th (where I will also be chairing a discussion session). In last year’s workshop, some extremely stimulating discussions of computer music aesthetics were informed by input from performers and experts at the electroacoustic coalface. The second iteration is not to be missed!


Ray Brassier’s “Unfree Improvisation/Compulsive Freedom” (written for Freedom is a Constant Struggle, a 2013 event at Glasgow’s Tramway) is a terse but insightful discussion of the notion of freedom in improvisation.

It begins with a polemic against the voluntarist conception of freedom. The voluntarist understands free action as the uncaused expression of a “sovereign self”. Brassier rejects this supernaturalist understanding of freedom. He argues that we should view freedom not as determination of an act from outside the causal order, but as the self-determination of action within the causal order.

According to Brassier, this structure is reflexive. It requires, first of all, a system that acts in conformity to rules but is capable of representing and modifying these rules with implications for its future behaviour. Insofar as there is a “subject” of freedom, then, it is not a “self” but depersonalized acts generated by systems capable of representing and intervening in the patterns that govern them.

The act is the only subject. It remains faceless. But it can only be triggered under very specific circumstances. Acknowledgement of the rule generates the condition for deviating from or failing to act in accordance with the rule that constitutes subjectivity. This acknowledgement is triggered by the relevant recognitional mechanism; it requires no appeal to the awareness of a conscious self….

Brassier’s proximate inspiration for this model of freedom is Wilfrid Sellars’ account of linguistic action in “Some Reflections on Language Games” (1954) and the psychological nominalism in which it is embedded. This distinguishes a basic rule-conforming level from a metalinguistic level in which it is possible to examine the virtues of claims, inferences or the referential scope of terms by semantic ascent: “Intentionality is primarily a property of candid public speech established via the development of metalinguistic resources that allows a community of speakers to talk about talk” (Brassier 2013b: 105; Sellars 1954: 226).

So, for Brassier, the capacity to explore the space of possibilities opened up by rules presupposes a capacity to acknowledge these sources of agency.

There are some difficult foundational questions that could be raised here. Is thought really instituted by linguistic rules or is language an expression of pre-linguistic intentional contents? Are these rules idiomatic (in the manner of Davidson’s passing theories) or communal? What is the relationship between the normative dimension of speech and thought and facts about what thinkers do or are disposed to do?

I’ve addressed these elsewhere, so I won’t belabor them here. My immediate interest, rather, is the extent to which Brassier’s account of act-reflexivity is applicable to musical improvisation.

Brassier does not provide a detailed account of its musical application in “Unfree Improvisation”. What he does write, though, is highly suggestive, implying that the act of free improvisation requires some kind of encounter between rule-governed rationality and more idiomatic patterns or causes:

The ideal of “free improvisation” is paradoxical: in order for improvisation to be free in the requisite sense, it must be a self-determining act, but this requires the involution of a series of mechanisms. It is this involutive process that is the agent of the act—one that is not necessarily human. It should not be confused for the improviser’s self, which is rather the greatest obstacle to the emergence of the act.

In (genuinely) free improvisation, it seems, determinants of action become “for themselves”. They enter into the performance situation as explicit possibilities for action.

This seems to demand that neurobiological or socioeconomic determinants of musical or non-musical action can become musical material, to be manipulated or altered by performers. How is this possible?

Moreover, is there something about improvisation (as opposed to conventional composition) that is peculiarly apt for generating the compulsive freedom of which Brassier speaks?

After all, his description of the determinants of action in the context of improvisation might apply to the situation of the composer as well. The composer of notated “art music” or the studio musician editing files in a digital-audio workstation seems better placed than the improviser to reflect on and develop her musical rule-conforming behaviour (e.g. exploratory improvisations). She has the ambit to explore the permutations of a melodic or rhythmic fragment or to eliminate sonic or gestural nuances that are, in hindsight, unproductive. The composed gesture is always open to reversal or editing and thus to further refinement.

Thus the improviser seems committed to what Andy Hamilton calls an “aesthetic of imperfection” – in contrast to the musical perfectionism that privileges the realized work. Hamilton claims that the aesthetics of perfection implies and is implied by a Platonic account for which the work is only contingently associated with particular times, places or musical performers (Hamilton 2000: 172). The aesthetics of imperfection, by contrast, celebrates the genesis of a performance and the embodiment of the performer in a specific time and space:

Improvisation makes the performer alive in the moment; it brings one to a state of alertness, even what Ian Carr in his biography of Keith Jarrett has called the ‘state of grace’. This state is enhanced in a group situation of interactive empathy. But all players, except those in a large orchestra, have choices inviting spontaneity at the point of performance. These begin with the room in which they are playing, its humidity and temperature, who they are playing with, and so on. (183)

An improvisation consists of irreversible acts that cannot be compositionally refined. They can only be repeated, developed or overwritten in time. It takes place in a time window limited by the memory and attention of the improviser, responding to her own playing, to the other players, or (as Brassier recognises) to the real-time behaviour of machines such as effects processors or midi-filters. Thus the aesthetic importance of the improvising situation seems to depend on a temporality and spatiality that distinguishes it from score-bound composition or studio-bound music production.

Yet, if this is right, it might appear to commit Brassier to a vitalist or phenomenological conception of the lived musical experience foreign to the anti-vitalist, anti-phenomenological tenor of his wider philosophical oeuvre. For this open, processual time must be counter-posed to the Platonic or structuralist ideal of the perfectionist. The imperfection and open indeterminacy of performance time must have ontological weight and insistence if Brassier’s programmatic remarks are to have any pertinence to improvisation as opposed to traditional composition.

This is not intended to be a criticism of Brassier’s position but an attempt at clarification. This commitment to an embodied, historical, machinic and physical temporality seems implicit in the continuation of the earlier passage cited from his text:

The improviser must be prepared to act as an agent—in the sense in which one acts as a covert operative—on behalf of whatever mechanisms are capable of effecting the acceleration or confrontation required for releasing the act. The latter arises at the point of intrication between rules and patterns, reasons and causes. It is the key that unlocks the mystery of how objectivity generates subjectivity. The subject as agent of the act is the point of involution at which objectivity determines its own determination: agency is a second-order process whereby neurobiological or socioeconomic determinants (for example) generate their own determination. In this sense, recognizing the un-freedom of voluntary activity is the gateway to compulsive freedom.

The improvising subject, then, is a process in which diverse processes are translated into a musical event or text that retains an expressive trace of its historical antecedents. As Brassier emphasizes, this process need not be understood in terms of human phenomenological time constrained by the “reverberations” of our working memory (Metzinger 2004: 129) – although this may continue to be the case in practice.

The Derridean connotations of the conjunction “event”/“text”/“trace” are deliberate, since the time of the improvising event is singular and productive – open to multiple repetitions that determine it in different ways. Improvisation is usually constrained (if not musically, then by time or technical skill or means), but these constraints rarely amount to rules or norms in the conventional sense. There is no single way in which to develop a simple Lydian phrase on a saxophone, a rhythmic cell, or a sample (an audio sample could be filtered, reversed or mangled by reading its entries out of order with a non-standard function, rather than the usual ramp). So the time of improvisation is a peculiarly naked exposure to “things”. Not to a sensory or categorical given, but precisely to an absence of a given that can be technologically remade.
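To illustrate that parenthetical point with a toy example (a hypothetical NumPy fragment, not any particular performer’s tool), the “non-standard function” is just an arbitrary mapping from output position to sample index, in place of the linear ramp that plays a sample back as recorded:

```python
import numpy as np

sr = 44100
t = np.arange(sr)                          # one second of sample indices
sample = np.sin(2 * np.pi * 440 * t / sr)  # stand-in for a recorded sample

ramp = t                                                # the usual linear read: plays the sample as recorded
scrambled = (t * 3 + (t // 1000) * 517) % len(sample)   # a non-standard read function: same entries, out of order

ordinary = sample[ramp]
mangled = sample[scrambled]                # same material, radically different result
print(ordinary[:4], mangled[:4])
```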

References:

Brassier, Ray. 2013a. “Unfree Improvisation/Compulsive Freedom”. http://www.mattin.org/essays/unfree_improvisation-compulsive_freedom.html (accessed March 2015).

Brassier, Ray. 2013b. “Nominalism, Naturalism, and Materialism: Sellars’ Critical Ontology”. In Bana Bashour & Hans D. Muller (eds.), Contemporary Philosophical Naturalism and its Implications, 101–14. Routledge.

Davidson, Donald. 1986. “A Nice Derangement of Epitaphs”. In Truth and Interpretation, E. LePore (ed.), 433–46. Oxford: Blackwell.

Hamilton, Andy. 2000. “The Art of Improvisation and the Aesthetics of Imperfection”. British Journal of Aesthetics 40 (1): 168–85.

Metzinger, T. 2004. Being No One: The Self-Model Theory of Subjectivity. Cambridge, MA: MIT Press.

Sellars, Wilfrid. 1954. “Some Reflections on Language Games”. Philosophy of Science 21 (3): 204–28.


Note on Cronenberg and Ontological Masochism

On February 22, 2015, in Uncategorized, by enemyin1

My first “Cronenberg” was Videodrome (1983). I saw the film with some university friends in a London jittery about an ongoing IRA mainland bombing campaign. I think there was a bomb scare in progress when we left our West End cinema, but this made little impact on me. I was in a state of aesthetic paralysis. I couldn’t form a judgment about whether it was “well made” or cinematically successful. My (even then, fragile) hold on good sense and taste was overwritten by a cinematic logic indifferent to such niceties. Its permutations of violence, death and desire should have been familiar to me from the works of Ballard. But Cronenberg’s visceral exploration of the boundaries between eroticism and death, flesh and technology had no precedent for me. Its eroticism – personified in Debbie Harry’s character, the self-destructively masochistic Nicki Brand – was less disturbing than what, with Leo Bersani, we might call its “ontological masochism”. Cronenberg’s film systematically erodes boundaries between flesh, reality and desire, and expects us to take pleasure in our loss of a world.

Even now, there are few artists with a keener eye for the fragility of ontological boundaries. Later sonata-form works such as The Fly and Dead Ringers showed that he could explore these themes with a lightness and rigor that only Ballard could match. But Videodrome recycles its image-noise, jacks into your brain and turns up the feedback. It is utterly indifferent to sentiment or to its own status as cinematic art, and is all the better for it.

The themes of technology, desire, matter and art are explored in the Virtual Museum of Canada’s rich and fascinating online exhibition devoted to Cronenberg’s work.

Bersani, Leo. 1986. The Freudian Body: Psychoanalysis and Art. New York: Columbia University Press.

I’ll be destruction testing my paper on Brandom and Posthumanism at the Open University in Milton Keynes, Wednesday 4 March at 2 pm in MR05, Wilson A, 1st Floor. The piece is forthcoming in Philosophy After Nature, Rosi Braidotti and Rick Dolphijn (eds.).

Here’s the abstract:

BRANDOM AND POSTHUMAN AGENCY: AN ANTI-NORMATIVIST RESPONSE TO BOUNDED POSTHUMANISM

By David Roden, Open University

I distinguish two theses regarding technological successors to current humans (posthumans): an anthropologically bounded posthumanism (ABP) and an anthropologically unbounded posthumanism (AUP). ABP proposes transcendental conditions on agency that can be held to constrain the scope for “weirdness” in the space of possible posthumans a priori. AUP, by contrast, leaves the nature of posthuman agency to be settled empirically (or technologically). Given AUP there are no “future proof” constraints on the strangeness of posthuman agents.

In Posthuman Life I defended AUP via a critique of Donald Davidson’s work on intentionality and a “naturalistic deconstruction” of transcendental phenomenology (See also Roden 2013). In this paper I extend this critique to Robert Brandom’s account of the relationship between normativity and intentionality in Making It Explicit (MIE) and in other writings.

Brandom’s account understands intentionality in terms of the capacity to undertake and ascribe inferential normative commitments. It makes “first class agency” dependent on the ability to participate in discursive social practices. It implies that posthumans – insofar as they qualify as agents at all – would need to be social and discursive beings.

The problem with this approach, I will argue, is that it replicates a problem that Brandom discerns in Dennett’s intentional stance approach. It tells us nothing about the conditions under which a being qualifies as a potential interpreter and thus little about the conditions for meaning, understanding or agency.

I support this diagnosis by showing that Brandom cannot explain how a non-sapient community could bootstrap itself into sapience by setting up a basic deontic scorekeeping system without appealing (along with Davidson and Dennett) to the ways in which an idealized observer would interpret their activity.

This strongly suggests that interpretationist and pragmatist accounts cannot explain the semantic or the intentional without regressing to assumptions about ideal interpreters or background practices whose scope they are incapable of delimiting. It follows that Anthropologically Unbounded Posthumanism is not seriously challenged by the claim that agency and meaning are “constituted” by social practices.

AUP implies that we can infer no claims about the denizens of “Posthuman Possibility Space” a priori, by reflecting on the pragmatic transcendental conditions for semantic content. We thus have no reason to suppose that posthuman agents would have to be subjects of discourse or, indeed, members of communities. The scope for posthuman weirdness can be determined by recourse to engineering alone.

References:

Roden, David. 2012. “The Disconnection Thesis”. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, A. Eden, J. Søraker, J. Moor & E. Steinhart (eds), 281–98. London: Springer.

Roden, David. 2013. “Nature’s Dark Domain: An Argument for a Naturalised Phenomenology”. Royal Institute of Philosophy Supplements 72: 169–88.

Roden, David. 2014. Posthuman Life: Philosophy at the Edge of the Human. London: Routledge.


Art and Posthumanism

On February 7, 2015, in Uncategorized, by enemyin1


This is an abstract for a presentation that I will be giving in a roundtable discussion on posthumanism and aesthetics with Debra Benita Shaw and Stefan Sorgner at the University of East London on May 18 2015. Further details will be made available.

Posthumanism can be critical or speculative. These positions converge in opposing human-centred (anthropocentric) thinking. However, their rejection of anthropocentrism applies to different areas. Critical Posthumanism (CP) rejects the anthropocentrism of modern philosophy and intellectual life; Speculative Posthumanism (SP) opposes human-centric thinking about the long-run implications of modern technology.

CP is interested in the posthuman as a cultural and political condition. Speculative Posthumanists propose the metaphysical possibility of technologically created nonhuman agents. SP states: there could be posthumans – where posthumans would be “wide human descendants” of current humans that have become nonhuman in virtue of some process of technical alteration.

In Posthuman Life I elaborate a detailed version of SP. Specifically, I describe what it is to become posthuman in terms of “the disconnection thesis” [DT] (Roden 2012; 2014, Chapter 5). DT understands “becoming posthuman” in abstract terms. Roughly, it states that an agent becomes posthuman iff it becomes independent of the human socio-technical system as a consequence of technical change. It does not specify how this might occur or the nature of the relevant agents (e.g. whether they are immortal uploads, cyborgs, feral robots or Jupiter-sized brains).

Posthuman Life argues that the abstractness of DT is epistemologically apt because there are no posthumans and thus we are in no position to deduce constraints on their possible natures or values (I refer to this position as “anthropologically unbounded posthumanism” [AUP]). AUP has implications for the ethics of becoming posthuman that are generally neglected in the literature on transhumanism and human enhancement.

The most important of these is that there can be no a priori ethics of posthumanity. Becoming posthuman can only be substantively (as opposed to abstractly) understood by making posthumans or becoming posthuman. I argue that, given the principled impossibility of a prescriptive ethics here, we must formulate strategies for speculating on and exploring nearby “posthuman possibility space”.

In this paper, I propose that aesthetic theory and practice may be a useful political model for such technological self-fashioning because it involves styles of thought or creation that discover their constraints and values by producing them. This “production model” is, I will argue, the only one liable to serve us if, with CP/SP, we reject an anthropocentric privileging of the human. I finish by considering some examples of aesthetic practice that might provide models for the politics of making posthumans or becoming posthuman.

 

References:

Roden, David. 2012. “The Disconnection Thesis”. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, A. Eden, J. Søraker, J. Moor & E. Steinhart (eds), 281–98. London: Springer.

Roden, David. 2014. Posthuman Life: Philosophy at the Edge of the Human. London: Routledge.


The Robo Menace to our Morals

On February 5, 2015, in Uncategorized, by enemyin1

 

Eric Schwitzgebel has a typically clear-eyed, challenging post on the implications of (real) artificial intelligence for our moral systems over here at the Splintered Mind. The take-home idea is that our moral systems (consequentialist, deontological, virtue-ethical, whatever) are adapted for creatures like us. The weird artificial agents that might result from future iterations of AI technology might be so strange that human moral systems would simply not apply to them.

Scott Bakker follows this argument through in his excellent Artificial Intelligence as Socio-Cognitive Pollution, arguing that blowback from such posthuman encounters might literally vitiate those moral systems, rendering them inapplicable even to us. As he puts it:

The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence is a development that raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines.

As any reader of Posthuman Life might expect, I think Eric and Scott are asking all the right questions here.

Some (not me) might object that our conception of a rational agent is maximally substrate neutral. It’s the idea of a creature we can only understand “voluminously” by treating it as responsive to reasons. According to some (Davidson/Brandom) this requires the agent to be social and linguistic – placing such serious constraints on “posthuman possibility space” as to render his discourse moot.

Even if we demur on this, it could be argued that the idea of a rational subject as such gives us a moral handle on any agent – no matter how grotesque or squishy. This seems true of the genus “utility monster”. We can acknowledge that UMs have goods and that consequentialism allows us to cavil about the merits of sacrificing our welfare for them. Likewise, agents with nebulous boundaries will still be agents and, so the story goes, rational subjects whose ideas of the good can be addressed by any other rational subject.

So according to this Kantian/interpretationist line, there is a universal moral framework that can grok any conceivable agent, even if we have to settle details about specific values via radical interpretation or telepathy. And this just flows from the idea of a rational being.

I think the Kantian/interpretationist response is wrong-headed. But showing why is pretty hard. A line of attack I pursue concedes to Brandom and Davidson that we have the craft to understand the agents we know about. But we have no non-normative understanding of the conditions something must satisfy to be an interpreting intentional system or an apt subject of interpretation (beyond commonplaces like heads not being full of sawdust).

So all we are left with is a suite of interpretative tricks whose limits of applicability are unknown. Far from being a transcendental condition on agency as such, it’s just a hack that might work for posthumans or aliens, or might not.

And if this is right, then there is no future-proof moral framework for dealing with feral robots, Cthulhoid monsters or the like. Following First Contact, we would be forced to revise our frameworks in ways that we cannot possibly have a handle on now. Posthuman ethics must proceed by way of experiment.

Or they might eat our brainz first.


Scott on the End Times

On November 25, 2014, in Uncategorized, by enemyin1

There’s a lively debate around Scott Bakker’s recent lecture: “The End of the World As We Know It: Neuroscience and the Semantic Apocalypse” given at The University of Western Ontario’s Centre for the Study of Theory and Criticism here at Speculative Heresy.  The text includes responses from Nick Srnicek and Ali McMillan.