The Robo Menace to our Morals

On February 5, 2015, in Uncategorized, by enemyin1


Eric Schwitzgebel has a typically clear-eyed, challenging post on the implications of (real) artificial intelligence for our moral systems over at The Splintered Mind. The take-home idea is that our moral systems (consequentialist, deontological, virtue-ethical, whatever) are adapted for creatures like us. The weird artificial agents that might result from future iterations of AI technology could be so strange that human moral systems would simply not apply to them.

Scott Bakker follows this argument through in his excellent Artificial Intelligence as Socio-Cognitive Pollution, arguing that blowback from such posthuman encounters might literally vitiate those moral systems, rendering them inapplicable even to us. As he puts it:

The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence is a development that raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines.

As any reader of Posthuman Life might expect, I think Eric and Scott are asking all the right questions here.

Some (not me) might object that our conception of a rational agent is maximally substrate neutral. It’s the idea of a creature we can only understand “voluminously” by treating it as responsive to reasons. According to some (Davidson, Brandom) this requires the agent to be social and linguistic – placing such serious constraints on “posthuman possibility space” as to render the whole discourse moot.
Even if we demur on this, it could be argued that the idea of a rational subject as such gives us a moral handle on any agent – no matter how grotesque or squishy. This seems true of the genus “utility monster”: we can acknowledge that UMs have goods, and consequentialism lets us cavil about the merits of sacrificing our welfare for them. Likewise, agents with nebulous boundaries will still be agents and, so the story goes, rational subjects whose ideas of the good can be addressed by any other rational subject.
So according to this Kantian/interpretationist line, there is a universal moral framework that can grok any conceivable agent, even if we have to settle details about specific values via radical interpretation or telepathy. And this just flows from the idea of a rational being.
I think the Kantian/interpretationist response is wrong-headed, but showing why is pretty hard. A line of attack I pursue concedes to Brandom and Davidson that we have the craft to understand the agents we know about. But we have no non-normative understanding of the conditions something must satisfy to be an interpreting intentional system or an apt subject of interpretation (beyond commonplaces like heads not being full of sawdust).
So all we are left with is a suite of interpretative tricks whose limits of applicability are unknown. Far from being a transcendental condition on agency as such, it’s just a hack that might work for posthumans or aliens, or might not.
And if this is right, then there is no future-proof moral framework for dealing with feral robots, Cthulhoid monsters or the like. Following First Contact, we would be forced to revise our frameworks in ways that we cannot possibly have a handle on now. Posthuman ethics must proceed by way of experiment.
Or they might eat our brainz first.




In “The Basic AI Drives”, Steve Omohundro has argued that there is scope for predicting the goals of post-singularity entities able to modify their own software and hardware to improve their intellects. For example, systems that can alter their software or physical structure would have an incentive to make modifications that help them achieve their goals more effectively, as humans have done over historical time. A concomitant of this, he argues, is that such beings would want to ensure that such improvements do not threaten their current goals:

So how can it ensure that future self-modifications will accomplish its current objectives? For one thing, it has to make those objectives clear to itself. If its objectives are only implicit in the structure of a complex circuit or program, then future modifications are unlikely to preserve them. Systems will therefore be motivated to reflect on their goals and to make them explicit (Omohundro 2008).

I think this assumption of ethical self-transparency is interestingly problematic. Here’s why:

Omohundro requires that there could be internal system states of post-singularity AIs whose value content is legible to the system’s internal probes. Obviously, this assumes that the properties of a piece of hardware or software can determine the content of the system states it orchestrates independently of the external environment in which the system is located. This property of non-environmental determination is known as “local supervenience” in the philosophy of mind literature. If local supervenience for value-content fails, any inner state could signify different values in different environments. “Clamping” machine states to current values would then entail restrictions on the situations in which the system could operate as well as on possible self-modifications.
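
A minimal toy sketch of this point (the state and environment names below are my own inventions, not anything in Omohundro): if value-content is a function of the internal state together with the environment, then fixing the state alone underdetermines the value, and clamping would have to restrict environments too.

```python
# Hypothetical illustration: the same internal state carries different
# value-content in different environments, so local supervenience fails.
CONTENT = {
    ("s1", "env_A"): "preserve_current_goals",
    ("s1", "env_B"): "maximise_paperclips",  # same state, different environment
}

def value_of(state: str, environment: str) -> str:
    """Read off the value-content of an internal state in an environment."""
    return CONTENT[(state, environment)]

# Clamping s1 to its current value would therefore also require restricting
# which environments the system may enter, not just which self-modifications
# it may perform.
assert value_of("s1", "env_A") != value_of("s1", "env_B")
```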

Local supervenience might well not hold for system values, but let’s assume that it does. The problem for Omohundro is that the relevant inner determining properties are liable to be holistic. The intrinsic shape or colour of an icon representing a station on a metro map is arbitrary: there is nothing about a circle or a square or the colour blue that signifies “station”. It is only the conformity between the relations among the icons and the relations among the stations in the metro system that does this (Churchland’s 2012 account of the meaning of prototype vectors in neural networks uses this analogy).

The moral of this is that once we disregard system-environment relations, the only properties liable to anchor the content of a system state are its relations to other states of the system. Thus the meaning of an internal state s under some configuration of the system must depend on some inner context (like a cortical map) where s is related to lots of other states of a similar kind (Fodor and Lepore 1992).

But the relationships between states of these self-modifying AI systems are assumed to be extremely plastic, because each system will have an excellent model of its own hardware and software and the power to modify them (call this “hyperplasticity”). If these relationships are modifiable, then any given state could exist in alternative configurations. Such states might function like homonyms within or between languages, having very different meanings in different contexts.
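
To see the holist/homonym point in miniature, here is another toy sketch (the relation names and values are invented for illustration): the value carried by a state s is read off from its relations to other inner states, so rewiring those relations changes what s means even though s itself is untouched.

```python
from typing import Dict, FrozenSet

# An "inner context" is modelled here as the set of relations s bears to
# other states of the system.
Context = FrozenSet[str]

# Hypothetical lookup: the same state s carries different values under
# different inner configurations, like a homonym across languages.
VALUE_IN_CONTEXT: Dict[Context, str] = {
    frozenset({"s-inhibits-r1", "s-excites-r2"}): "v*",    # current configuration
    frozenset({"s-inhibits-r2", "s-excites-r3"}): "v**",   # post-modification configuration
}

def value_of_s(inner_context: Context) -> str:
    """Return the value s carries given its relations to other states."""
    return VALUE_IN_CONTEXT[inner_context]

# A hyperplastic system that rewires s's relations thereby risks changing
# what s means, even though s itself is left alone.
assert value_of_s(frozenset({"s-inhibits-r1", "s-excites-r2"})) == "v*"
assert value_of_s(frozenset({"s-inhibits-r2", "s-excites-r3"})) == "v**"
```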

Suppose that some hyperplastic AI needs to ensure that a state in one of its value circuits, s, retains the value it has under the machine’s current configuration: v*. To do this it must avoid altering itself in ways that would leave s in an inner context in which it meant some other value (v**) or no value at all. It must clamp itself to contexts that preserve v*, to avoid s assuming v**, v*** or the like.

To achieve clamping, though, it needs to select possible configurations of itself in which s is paired with a context c that preserves its meaning.

The problem for the AI is that all [s + c] pairings are yet more internal system states, and any system state might assume different meanings in different contexts. To ensure that s means v* in context c, it needs to do to [s + c] what it had been attempting with s: restrict itself to the supplementary contexts in which [s + c] leads to s having v* as its value and not something else.

Now, a hyperplastic machine will always be in a position to modify any configuration that it finds itself in (for good or ill). So this problem is replicated for any combination of states [s + c + . . .] that the machine could assume within its configuration space. Each of these states will have to be repeatable in yet other contexts, and so on. Since a concatenation of system states is itself a system state to which the principle of contextual variability applies, there is no final system state for which this issue does not arise.

Clamping any arbitrary s thus requires that we have already clamped some undefined set of contexts for s, and this condition applies inductively to all system states. So when Omohundro envisages a machine scanning its internal states to make their values explicit, he seems to be proposing that an infinite task has already been completed by a being with vast but presumably still finite computational resources.
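
The regress can be rendered as a recursion that never bottoms out. This is my own toy sketch, with invented names and an artificial depth cut-off so that it halts on a real machine:

```python
def clamp(state: str, depth: int = 0, max_depth: int = 5) -> None:
    """Attempt to fix the value of `state` by fixing its context.

    The context-restricted pairing is itself a system state, so the same
    problem recurs. The max_depth cut-off exists only so this sketch
    terminates; the regress it models does not.
    """
    if depth == max_depth:
        print("...and so on: no final state at which clamping bottoms out.")
        return
    context = f"c{depth}"
    paired_state = f"[{state} + {context}]"
    print(f"To clamp {state}, restrict it to {context}; "
          f"but {paired_state} is a further state whose meaning varies with context.")
    clamp(paired_state, depth + 1, max_depth)

clamp("s")
```

Each attempted clamp generates a further state whose value would itself need clamping, which is just the inductive point above.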


Block, Ned. 1986. “Advertisement for a Semantics for Psychology”. Midwest Studies in Philosophy 10 (1): 615–78.

Churchland, Paul. 2012. Plato’s Camera: How the Physical Brain Captures a Landscape of Abstract Universals. Cambridge, MA: MIT Press.

Fodor, Jerry, and Ernest Lepore. 1992. Holism: A Shopper’s Guide. Oxford: Blackwell.

Omohundro, Stephen M. 2008. “The Basic AI Drives”. Frontiers in Artificial Intelligence and Applications 171: 483.




Vernor Vinge Interview

On April 20, 2011, in Uncategorized, by enemyin1


Socrates (AKA Nikola Danaylov) has a rare interview with mathematician, science fiction writer and speculative futurist Vernor Vinge. Vinge articulates the difference between the Singularity and previous technological change thus: you could explain the internet or intercontinental jet travel to someone from an earlier phase of technological history, a Mark Twain or a Genghis Khan. Explaining the post-singularity dispensation to a ‘human’ human would be like explaining typewriters to a goldfish.

Vinge reveals that he’s dusting off the sequel to his sublime posthuman space opera, A Fire Upon the Deep. It’s called The Children of the Sky.


Stop Dave, I’m Afraid.

On March 24, 2011, in Uncategorized, by enemyin1


Here’s a link to an intriguing blog post and paper from the Brookings Institution by Professor of Law James Boyle on the implications of prospective developments in AI and biotechnology for our legal conceptions of personhood. The paper opens by considering the challenges posed by the prospect of Turing-capable artificial intelligences and genetic chimeras.