This is a sketch of a partial value theory that I’ve been developing while completing my book Posthuman Life. If there are similar theories out there, I’d be grateful for links to bibdata so that I can properly acknowledge them!
In order to construct an anthropologically unbounded account of posthumans, we need a psychology-free account of value. There may, after all, be many possible posthuman psychologies, but we don't know about any of them to date. However, the theory requires posthumans to be autonomous systems of a special kind: Functionally Autonomous Systems (see below). I understand "autonomy" here as a biological capacity for active self-maintenance. The idea of a system which intervenes in the boundary conditions required for its existence can be used to formulate an Autonomous Systems Account (ASA) of function which avoids some of the metaphysical problems associated with the more standard etiological theory. The version of the ASA developed by Wayne Christensen and Mark Bickhard defines the functions of an entity in terms of its contribution to the persistence of an autonomous system, which they conceive as a group of interdependent processes (Christensen and Bickhard 2002: 3). Functions are process dependence relations within actively self-maintaining systems.
Ecological values are constituted by functions. This conception, in turn, allows us to formulate an account of "enlistment", which then allows us to define what it is to be an FAS.
1) (ASA) Each autonomous system has functions belonging to it at some point in its history. Its functions are the interdependent processes it requires to remain autonomous at that point.
2) (Value) If a process, thing or state is required for a function to occur, then that process, thing or state is a value for that function. Any entity, state or resource can be a value. For example, the proper functioning of one function can be a value for other functions that require it to work.
3) (Enlistment) When an autonomous system produces a function, then any value of that function is enlisted by that system.
4) (Accrual) An FAS actively accrues functions by producing functions that are also values for other FAS’s.
5) (Functional Autonomy) A functionally autonomous system (FAS) is any autonomous system that can enlist values and accrue functions.
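Purely as a bookkeeping aid, definitions 1–5 can be given a toy formalization. The Python sketch below is my own gloss, not part of the theory: it assumes (my assumption) that functions and their prerequisites can be treated as nodes in a dependency graph, with systems as owners of functions, and the example entities (mining, smelting, ore) are illustrative only.

```python
# A minimal toy gloss on definitions 1-5; the graph-like representation is an
# assumption made for illustration, not a claim of the theory itself.

class Function:
    """A process dependence relation within an autonomous system (def. 1).
    Whatever it requires is a value for it (def. 2)."""
    def __init__(self, name, requires=()):
        self.name = name
        self.requires = set(requires)  # the values this function depends on

class AutonomousSystem:
    """An actively self-maintaining system that owns a set of functions."""
    def __init__(self, name, functions=()):
        self.name = name
        self.functions = set(functions)

def enlisted_values(system):
    """Def. 3: every value of a function the system produces is enlisted by it."""
    return {value for f in system.functions for value in f.requires}

def accrues_functions(system, others):
    """Def. 4: the system accrues functions when functions it produces are
    values for other systems' functions."""
    return any(f in g.requires
               for f in system.functions
               for other in others
               for g in other.functions)

def is_fas(system, others):
    """Def. 5: an FAS both enlists values and accrues functions."""
    return bool(enlisted_values(system)) and accrues_functions(system, others)

# Hypothetical example: mining enlists ore; smelting (another system's
# function) requires mining, so the mining system accrues a function.
mining = Function("mining", requires={"ore"})
smelting = Function("smelting", requires={mining, "fuel"})
miner = AutonomousSystem("miner", {mining})
smelter = AutonomousSystem("smelter", {smelting})
print(is_fas(miner, [smelter]))  # True
```

Degrees of dependency (the qualification to definition 2 mooted below) could be modelled by weighting the dependency edges, but the unweighted version captures the bare structure.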
People are presumably FAS's on this account, but so are nonhuman organisms and (perhaps) lineages of organisms. Likewise social systems (Collier and Hooker 1999) and (conceivably) posthumans. To date, technical entities are not FAS's because they are non-autonomous. Historical technologies are mechanisms of enlistment, however: without mining technology, for example, certain ores would not be values for human activities. Social entities, such as corporations, are autonomous in the relevant sense and thus can have functions (process interdependency relations) and constitute values of their own. However, while not narrowly human, current social systems are wide humans, not posthumans. As per the Disconnection Thesis, posthumans would be FAS's no longer belonging to WH (the Wide Human socio-technical assemblage – see Roden 2012).
This is an ecological account in the strict sense of specifying values in terms of environmental relations between functions and their prerequisites (though "environment" should be interpreted broadly to include endogenous as well as exogenous entities or states). It is also an objective rather than subjective account which has no truck with the spirit (meaning, culture, subjectivity, etc.). Values are just things which enter into constitutive relations with functions (Definition 2 could be expanded and qualified by introducing degrees of dependency). Oxygen was an ecological value for aerobic organisms long before Lavoisier. We can be ignorant of our values and mistake non-values for values, etc. It is also arguable that some ecological values are pathological in that they support some functions while hindering others.
The theory is partial because it only provides a sufficient condition for value. Some values – opera, cigarettes, incest prohibitions and sunsets – are arguably things of the spirit, constituted as values by desires or cultural meanings.
Christensen, W. D., and M. H. Bickhard. 2002. “The Process Dynamics of Normative Function.” The Monist 85 (1): 3–28.
Collier, J. D., and C. A. Hooker. 1999. "Complexly Organised Dynamical Systems." Open Systems & Information Dynamics 6 (3): 241–302.
Roden, D. 2012. "The Disconnection Thesis." In The Singularity Hypothesis: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, Jim Moor, and Eric Steinhart. Springer Frontiers Collection.
An issue I do not have time to consider is that ecological dependency is transitive. If a function depends on a thing whose existence depends on another thing, then it depends on that other thing. Ecological dependencies thus overlap.
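For what it is worth, the transitivity point can be made concrete with a small sketch (again my own illustration, using the oxygen example from above): closing the direct dependency relation under transitivity shows how the dependency sets of different functions come to overlap.

```python
# Toy illustration of transitive ecological dependency: if a function depends
# on x and x depends on y, the function depends on y as well.

def transitive_dependencies(direct, start):
    """`direct` maps each item to the set of things it immediately depends on;
    returns everything `start` depends on, directly or indirectly."""
    seen, frontier = set(), set(direct.get(start, set()))
    while frontier:
        item = frontier.pop()
        if item not in seen:
            seen.add(item)
            frontier |= direct.get(item, set())
    return seen

direct = {
    "aerobic respiration": {"oxygen"},
    "oxygen": {"photosynthesis"},
    "photosynthesis": {"sunlight", "water"},
}
print(transitive_dependencies(direct, "aerobic respiration"))
# {'oxygen', 'photosynthesis', 'sunlight', 'water'}
```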
 Addictive substances may fall into this class.
In "The Basic AI Drives", Steve Omohundro has argued that there is scope for predicting the goals of post-singularity entities able to modify their own software and hardware to improve their intellects. For example, systems that can alter their software or physical structure would have an incentive to make modifications that would help them achieve their goals more effectively, as humans have done over historical time. A concomitant of this, he argues, is that such beings would want to ensure that such improvements do not threaten their current goals:
So how can it ensure that future self-modifications will accomplish its current objectives? For one thing, it has to make those objectives clear to itself. If its objectives are only implicit in the structure of a complex circuit or program, then future modifications are unlikely to preserve them. Systems will therefore be motivated to reflect on their goals and to make them explicit (Omohundro 2008).
I think this assumption of ethical self-transparency is interestingly problematic. Here’s why:
Omohundro makes the Cartesian assumption that the properties of a piece of hardware or software can uniquely specify the content of the system states it orchestrates, independently of the external environment in which the system is located (otherwise probing those states would yield different values in different environments, and clamping states to particular values would require restrictions on the situations in which the system could operate).
Let us allow that there is a correct internalist account which explains why content supervenes on the state of the AI system independently of its environment.
The problem for Omohundro is that such internalist accounts are liable to be holistic. Once we disregard system-environment relations, the only properties which seem to "anchor" the meaning of a system state are its relations to other states of the system of a relevant kind. There is nothing about the shape or colour of an icon representing a station on a metro map which means "station". It is only the conformity between the relations among the icons and the relations among the stations of the metro system it represents which does this (Churchland's 2012 account of the meaning of prototype vectors in neural networks uses this analogy – but see also Block 1986 for the inferential-role version of internalism). Thus the meaning of an internal state s under some configuration of the system is fixed by some inner context, such as a cortical map, whereby s is related to many other states of a similar kind.
But relationships between states of the self-modifying AI system are assumed to be extremely plastic, because each system will have an excellent model of its own hardware and software and the technological means to modify them (hyperplasticity). If these relationships are modifiable, then any given state could exist in alternative configurations – in Derrideanese, it will be "iterable" through different articulations of the system (Derrida 1988). For a machine (or any being) to interpret an internal system state s as meaning the value v* exclusively, then, it must have decided that contexts in which s means v* are privileged. It must then clamp itself to those contexts to avoid s assuming v** or v***, etc.
So to clamp s at v*, the system will need to decide to stay only within the stack of inner contexts C in which s retains that meaning. But how does it know which contexts to assign to the "permissible" stack?
An inner context in which s means v* and not v** is just another, wider system state that could also be included in other possible incarnations of the wider system in which it occurs. These need not be permutations of the system's actual states at any time, since we suppose that the system is hyperplastic and can add components to itself without restriction.
So to clamp s at v*, the AI will need to have found all the members of C. It will need to consider all its possible system states (including all possible non-actual states) and select which wider states keep s at v*. The problem that arises here is that each wider system state is just a system state. Its meaning (e.g. its effect on s) may vary between its possible contexts. And that is true of any context that the machine can consider. So every context raises the problem that originally arose for our original state s.
Thus even allowing the truth of an ideal internalist account of meaning, the legibility of a state signifying a value presupposes a context that cannot itself be made legible, on pain of infinite regress.
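To make the shape of the regress explicit, here is a deliberately crude toy model in Python. The representation of states and contexts is entirely my own simplifying assumption, not anything in Omohundro or the internalist accounts cited: fixing the meaning of a state requires first fixing the context that anchors it, but each context is just another state needing the same treatment, so the procedure never bottoms out.

```python
# Crude toy model of the clamping regress: the meaning of a state is only fixed
# relative to a wider context, and every context is itself just another system
# state whose contribution depends on yet wider contexts.

def clamp(state, context_of):
    """Try to clamp `state` to a determinate meaning (e.g. the value v*).

    `context_of` returns the wider state on which `state`'s meaning depends.
    Since that context must itself be clamped first, there is no base case:
    the attempted self-interpretation recurses without terminating.
    """
    context = context_of(state)
    return clamp(context, context_of)

# Each inner context is just a wider state that in turn needs interpreting.
wider = lambda s: ("context of", s)

try:
    clamp("s", wider)
except RecursionError:
    print("No finite self-inspection fixes s at v*: some context must be "
          "relied on without itself being made legible.")
```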
Thus Robo-Existentialism! Even a hyperplastic AI capable of freely modifying its own hardware and software will always already have "taken a stand" on "embodied" values that it has not chosen in order to read its own system states (so long as we assume that it lacks some weird super-Turing powers that might allow it to complete an infinite series of computations).
Block, Ned. 1986. "Advertisement for a Semantics for Psychology." Midwest Studies in Philosophy 10 (1): 615–78.
Churchland, Paul. 2012. Plato's Camera: How the Physical Brain Captures a Landscape of Abstract Universals. Cambridge, MA: MIT Press.
Derrida, J. 1988. Limited Inc. Northwestern University Press.
In this excellent presentation, Saxe claims that Transcranial Magnetic Stimulation applied to the temporo-parietal junction (TPJ) – a region specialized for mentalizing in human adults – can improve the effectiveness of moral reasoning by improving our capacity to understand other human minds.
This suggests an interesting conundrum for moral philosophers working in the Kantian tradition, where recognizing the rationality and personhood of offenders is held to be a sine qua non for justifications of punishment. We can imagine a Philip K. Dick-style world in which miscreants are equipped with surgically implanted TMS devices which zap them whenever an automated surveillance system judges them to be in a morally tricky situation calling for rapid and reliable judgements about others' mental states. Assuming that such devices would be effective, would this still constitute a violation of the offender's personhood – treating the offender as a refractory animal who must be conditioned to behave in conformity with societal norms, like Alex in A Clockwork Orange? Or would the enhancement give that status its due by helping the offender become a better deliberator?
Assuming the TMS devices could achieve their aim of improving moral cognition, it seems odd to say that this would be a case of "tiger training" which bypasses the offender's capacity for moral reasoning, since it would presumably increase that very capacity. It is even conceivable that an effective moral enhancement could be co-opted by savvy Lex Luthor types to enhance the criminal capacities of their roughnecks, making them more effective at manipulating others and sizing up complex situations. At the same time, it would be quite different from punishment practices that appeal to the rational capacities of the offender. Having one's TPJ zapped is not the same as being asked to understand the point of view of one's victim – though it might enhance one's ability to do so.
So an effective moral enhancement that increases the capacity for moral reasoning in the cognitively challenged would be neither a violation of nor an appeal to their reason. It would not be like education or a talking therapy, but neither would it be like the cruder forms of chemical or psychological manipulation. It could enhance people's moral capacities, but it would do so by tying them into technical networks that, as we know, can be co-opted for ends their creators never anticipated. It might enhance the capacity for moral agency while also increasing its dependence on the vagaries of wider technical systems. Some would no doubt see such a development as posthuman biopower at its most insidious. They would be right, I think, but technology is insidious precisely because our florid agency depends on a passivity before cultural and technical networks that extend it without expressing a self-present and original human subjectivity.
Deep into the morning procrastination ritual – reading two or more blogs and FB instead of the chapter I'm meant to be finishing – I realized that I had forgotten what I had been reading a minute ago. So I let my mouse hover over the IE icon on my taskbar and, hey presto, I saw a "mouse over" preview of the Discover post on identical twins I had been perusing. Moral: the extended mind works, but it needs metacognition to patch its resources together.
"Distracted from distraction by distraction" – T. S. Eliot, Burnt Norton