In this excellent presentation Saxe claims that Transcranial Magnetic Stimulation applied to the temporo-parietal junction (TPJ) – a region specialized for mentalizing in human adults – can improve the effectiveness of moral reasoning by improving our capacity to understand other human minds.
This suggests an interesting conundrum for moral philosophers working in the Kantian tradition, where recognizing the rationality and personhood of offenders is held to be a sine qua non for justifications of punishment. We can imagine a Philip K. Dick-style world in which miscreants are equipped with surgically implanted TMS devices which zap them whenever an automated surveillance system judges them to be in a morally tricky situation calling for rapid and reliable judgements about others’ mental states. Assuming that such devices would be effective, would this still constitute a violation of the offender’s personhood – treating the offender as a refractory animal who must be conditioned to behave in conformity with societal norms, like Alex in A Clockwork Orange? Or would the enhancement give that status its due by helping the offender become a better deliberator?
Assuming the TMS devices could achieve their aim of improving moral cognition, it seems odd to say that this would be a case of “tiger training” which bypasses the offender’s capacity for moral reasoning since it would presumably increase that very capacity. It is even conceivable that an effective moral enhancement could be co-opted by savvy Lex Luthor types to enhance the criminal capacities of their roughnecks, making them more effective at manipulating others and sizing up complex situations. At the same time, it would be quite different from punishment practices that appeal to the rational capacities of the offender. Having one’s TPJ zapped is not the same as being asked to understand the POV of your victim – though it might enhance your ability to do so.
So an effective moral enhancement that increases the capacity for moral reasoning in the cognitively challenged would be neither a violation of nor an appeal to their reason. It would not be like education or a talking therapy, but neither would it be like the cruder forms of chemical or psychological manipulation. It could enhance the moral capacities of people, but it would do so by tying them into technical networks that, as we know, can be co-opted for ends that their creators never anticipated. It might enhance the capacity for moral agency while also increasing its dependence on the vagaries of wider technical systems. Some would no doubt see such a development as posthuman biopower at its most insidious. They would be right, I think, but technology is insidious precisely because our florid agency depends on a passivity before cultural and technical networks that extend it without expressing a self-present and original human subjectivity.
There’s an epic flame war over at Three Pound Brain in response to Scott Bakker’s discussion of Levi Bryant’s Object Oriented Ontology. I’m sitting this one out like my hero Custard the Cat – in part because I’m just too busy, and in part cos’ I don’t want to distract Scott from the trudge to Golgotterath and the moral necessity of euthanizing our immortal souls.
Brother Cavil’s speech from BSG’s episode ‘No Exit’ is the plaint of a being whose morphological freedom has been arbitrarily denied. Cavil’s romantic transhumanism is far more cogent and appealing, here, than Ellen Tigh’s feeble humanism.
Well, to be fair, it probably isn’t, but, on the strength of this post over at Larval Subjects, Bryant might just believe that it is. The idea seems to be that representation depends on dynamic and fluid interactions between objects, thus either representation is not what we thought it was (reductionism) or there ain’t such a thing (eliminativism).
Here’s a rough attempt at formalization.
Assumption: Representational/semantic (RS) properties are static.
1) Agency properties (the behaviour of agents) are non-static (i.e. dynamic, fluid, etc.).
2) RS properties supervene on (depend on) Agency properties.
3) All supervenient properties have the same higher-order properties as their subvenient properties.
Conclusion: By (3), RS properties are non-static (contrary to the assumption).
However, this is an unsound argument because 3) is patently false. Supervenient properties don’t get all their higher order properties from their base of subvenient properties. Aesthetic properties plausibly supervene on physical properties (if two things are physically identical, they are aesthetically identical) but physical properties are quantifiable whereas aesthetic properties are not.
So for the argument to work we need to assert either identity between dynamic agency properties and representational/semantic ones (reductionism) at 3, so we can get to the conclusion via the indiscernibility of identicals, or eliminativism (there are no RS properties).
So if this argument supports OOO, OOO is committed to reductionism or eliminativism.
To put this argument into the context of Levi’s homeostat example:
I’m not contesting the OOO claim regarding the epistemic impenetrability of objects or its claim regarding the non-representational character of our access to them.
However, the considerations adduced in Levi’s post here only establish that our access to objects is non-representational if we make extremely deflationary assumptions about the relationship between knowledge and the dynamic processes in cybernetic systems, since the mere dependence of representation on dynamics does not suffice to press his claim.
For example, even if all states of an information processing system S are responsive to changing outputs of objects in S’s world, it doesn’t follow that some of those states are not also responsive to internal states of those objects. S might be armed with a hypothesis-forming device: say, a feedforward neural network with the input layer corresponding to the sensory input from the object’s outputs, while the ‘hidden’ layer might flip into one state if the object being tracked is moving in a phototropic way and into another if it is behaving photophobically. If these behaviours are caused by internal states of the object, then S could track persistent and causally determinative internal states of the object. In terms of the state space of the system, the phototropic/photophobic difference would correspond to a partition of that total space by the hidden layer.
If the hidden layer state merely replicates the dynamically changing input or responds randomly (as would be the case in a network prior to training) then this presumably won’t be the case.
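The point can be made concrete with a minimal sketch. Everything here is my own illustration, not anything in Levi’s post: the weights are hand-set rather than trained, the encoding of the object’s outputs (+1 for a move toward the light, −1 for a move away) is invented for the example, and `hidden_state` is a hypothetical name. The sketch shows how a hidden unit that pools over changing outputs can settle into one of two stable states corresponding to a persistent disposition of the tracked object.

```python
import numpy as np

def hidden_state(observations):
    """Classify the tracked object's disposition from its outward behaviour.

    observations: array of +1 (moved toward light) / -1 (moved away),
    one entry per observed time step. The hidden unit pools the evidence
    and partitions the state space into two regions.
    """
    w = np.ones(len(observations))       # equal weight on each observation
    activation = np.tanh(w @ observations)
    return "phototropic" if activation > 0 else "photophobic"

# The inputs fluctuate from moment to moment, but the hidden state tracks
# the persistent internal state (the disposition) behind them.
print(hidden_state(np.array([+1, +1, -1, +1])))   # mostly toward the light
print(hidden_state(np.array([-1, -1, -1, +1])))   # mostly away from it
```

An untrained or merely echoing hidden layer, by contrast, would change state with every input and carve out no such partition – which is the contrast drawn above.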
So if we identify knowledge states with fluidly changing states recording the passing scene, we get the reductive result that all we can know is the passing scene. We could also get to a similar position if we simply reject the claim that some objects – like homeostats – have internal states with causal roles (input-output conditions). I suppose OOO fans have to commit to some such claim, but this doesn’t follow from anything known about cybernetic systems unless this knowledge excludes the possibility of hypothesis-generating mechanisms and merely considers the raw input from sensory transducers – which is not the case.