Roden, David. 2015a. “Aliens Under the Skin: Serial Killing and the Seduction of Our Common Inhumanity”, in Serial Killing: A Philosophical Anthology, Edia Connole & Gary J. Shipley (eds). Schism Press.
Phenomenology is, as I have argued elsewhere, striated with “darkness” – experiencing it affords only a partial and very fallible insight into its nature. We are not normally aware of this darkness because, as Scott Bakker writes, it “provides no information about the absence of information.” However, this opacity can be exhibited from a third-person perspective in cases of “anosognosia” – conditions where patients are unable to access the fact that they have some sensorimotor deficit, such as blindness, deafness or the inability to move a limb. Sufferers from Anton’s syndrome or “blindness denial,” for example, are blind as a result of damage to visual areas in the brain. But when questioned, they deny that they are blind and attempt to act as if they were not. This shows not only that people can be radically mistaken about the contents of their conscious experience but that a standard Cartesian impossibility claim – that we cannot make a perceptual judgment without having a corresponding perception – is false. Minds assumed impossible on the basis of armchair reasoning turn out to be quite possible.
The blindness of the mind to its true nature is also exhibited among unimpaired agents. We regularly assume that we are authoritative about the reasons for our choices. Yet studies into the phenomenon of “choice blindness” by Petter Johansson and Lars Hall suggest that humans can be gulled into attributing reasons to themselves that they did not have. In one case, subjects in a supermarket were asked to rate jams and teas, following which they were apparently presented with samples of the tea or jam they had chosen earlier and asked to explain their choice. In manipulated trials the samples were sneakily switched with samples of different products. Remarkably, fewer than half of the experimental participants noticed the switch, despite striking differences between the substituted pairs of flavours. The remainder sought retrospective justifications for choices they had not made.
Johansson and Hall have also been able to exhibit choice blindness in moral reasoning. In another experiment, subjects were asked to rate their agreement with controversial moral claims in a survey form. Unbeknownst to the experimental subjects, the pages with the original rated statements were switched for subtly altered sentences expressing contrary moral claims. However, when asked to review and discuss their ratings, a majority of experimental subjects confabulated reasons for moral positions opposing the ones they had earlier embraced.
Phenomena such as choice blindness and anosognosia suggest that our insight into subjectivity depends on a fallible process of self-interpretation that is subjectively “transparent” and immediate only because we are not aware that it is a process at all. Thomas Metzinger calls this constraint “autoepistemic closure.” By virtue of it, the vivid world “out there” and our vital, rich “inner” life appear not to be models or interpretations only because we are not aware of concocting them.
Metzinger argues that phenomenology is systematically misleading about what phenomenology really is because it needs to be. A system that modeled itself and attempted to model that modeling process in turn (and so on) would require infinite representational resources. Phenomenological darkness thus prevents the self-interpreter from becoming entangled “in endless internal loops of higher-order self-modeling.” It is thus reasonable to argue that the anti-reductionist intuition that subjective experience is inexplicable in terms of non-subjective physical or computational processes is an artifact of this phenomenological darkness.
 David Roden, “Nature’s Dark Domain: an Argument for a Naturalised Phenomenology.” Royal Institute Of Philosophy Supplement 72 (2013), 169-188.
 R. Scott Bakker. “Back to Square One: Towards a Post-Intentional Future”. http://scientiasalon.wordpress.com/2014/11/05/back-to-square-one-toward-a-post-intentional-future (accessed January 8, 2015).
Thomas Metzinger, Being No One: The Self-Model Theory of Subjectivity (Cambridge, MA: MIT Press, 2004), 429–436.
Lars Hall, Petter Johansson, and David de Léon, “Recomposing the Will: Distributed Motivation and Computer-Mediated Extrospection,” in Decomposing the Will (New York: Oxford University Press, 2013), 298–324, at 303–4.
Metzinger, Being No One, 57.
Metzinger, Being No One, 338.
Metzinger, Being No One, 436.
This is a sketch of a partial value theory that I’ve been developing while completing my book Posthuman Life. If there are similar theories out there, I’d be grateful for links to bibdata so that I can properly acknowledge them!
In order to construct an anthropologically unbounded account of posthumans, we need a psychology-free account of value. There may, after all, be many possible posthuman psychologies, but we don’t know about any of them to date. However, the theory requires posthumans to be autonomous systems of a special kind: Functionally Autonomous Systems (see below). I understand “autonomy” here as a biological capacity for active self-maintenance. The idea of a system which intervenes in the boundary conditions required for its existence can be used to formulate an Autonomous Systems Account (ASA) of function which avoids some of the metaphysical problems associated with the more standard etiological theory. The version of the ASA developed by Wayne Christensen and Mark Bickhard defines the functions of an entity in terms of its contribution to the persistence of an autonomous system, which they conceive as a group of interdependent processes (Christensen and Bickhard 2002: 3). Functions are process-dependence relations within actively self-maintaining systems.
Ecological values are constituted by functions. This conception, in turn, allows us to formulate an account of “enlistment,” which then allows us to define what it is to be an FAS.
1) (ASA) Each autonomous system has functions belonging to it at some point in its history. Its functions are the interdependent processes it requires to remain autonomous at that point.
2) (Value) If a process, thing or state is required for a function to occur, then that thing or process is a value for that function. Any entity, state or resource can be a value. For example, the proper functioning of a function can be a value for the functions that require it to work.
3) (Enlistment) When an autonomous system produces a function, then any value of that function is enlisted by that system.
4) (Accrual) An FAS actively accrues functions by producing functions that are also values for other FAS’s.
5) (Functional Autonomy) A functionally autonomous system (FAS) is any autonomous system that can enlist values and accrue functions.
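Definitions (1)–(5) can be pictured as a small data structure. The following is a toy formalization of my own: all class and function names are illustrative inventions, not part of Christensen and Bickhard's account, and the sketch deliberately flattens the notion of "process" into a simple object.

```python
# Toy formalization of definitions (1)-(5). All names here are my own
# illustrative inventions, not part of the theory itself.

class Function:
    """A process within an actively self-maintaining system (Definition 1)."""
    def __init__(self, name, values=None):
        self.name = name
        # Definition 2: whatever this process requires in order to occur
        # counts among its values (resources, states, or other Functions).
        self.values = set(values or [])

class AutonomousSystem:
    """A group of interdependent, actively self-maintaining processes."""
    def __init__(self, name):
        self.name = name
        self.functions = set()

    def produce(self, function):
        # Definition 3: producing a function enlists all of its values.
        self.functions.add(function)

    def enlisted_values(self):
        vals = set()
        for f in self.functions:
            vals |= f.values
        return vals

def accrues(system, function, others):
    """Definition 4: a system accrues a function by producing one that is
    also a value for some function of another FAS."""
    system.produce(function)
    return any(function in f.values for other in others for f in other.functions)

# Mining example from the text: without the mining function, certain ores
# would not be values for human activity.
human = AutonomousSystem("wide human assemblage")
mining = Function("mining")
smelting = Function("smelting", values={mining, "iron ore"})
human.produce(smelting)
```

On this sketch, a system satisfying Definition 5 is simply one for which `accrues` can come out true: it can produce functions that other FAS's depend on.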
People are presumably FAS’s on this account, but so are nonhuman organisms and (perhaps) lineages of organisms. Likewise social systems (Collier and Hooker 1999) and (conceivably) posthumans. To date, technical entities are not FAS’s because they are non-autonomous. Historical technologies are mechanisms of enlistment, however. For example, without mining technology, certain ores would not be values for human activities. Social entities, such as corporations, are autonomous in the relevant sense and thus can have functions (process interdependency relations) and constitute values of their own. However, while not narrowly human, current social systems are wide humans, not posthumans. As per the Disconnection Thesis: posthumans would be FAS’s no longer belonging to WH (the Wide Human socio-technical assemblage – see Roden 2012).
This is an ecological account in the strict sense of specifying values in terms of environmental relations between functions and their prerequisites (though “environment” should be interpreted broadly to include endogenous as well as exogenous entities or states). It is also an objective rather than a subjective account, which has no truck with the spirit (meaning, culture, subjectivity, etc.). Values are just things that enter into constitutive relations with functions (Definition 2 could be expanded and qualified by introducing degrees of dependency). Oxygen was an ecological value for aerobic organisms long before Lavoisier. We can be ignorant of our values and mistake non-values for values, etc. It is also arguable that some ecological values are pathological in that they support some functions while hindering others.
The theory is partial because it only provides a sufficient condition for value. Some values – Opera, cigarettes, incest prohibitions and sunsets – are arguably things of the spirit, constituted as values by desires or cultural meanings.
Christensen, W. D., and M. H. Bickhard. 2002. “The Process Dynamics of Normative Function.” The Monist 85 (1): 3–28.
Collier, J. D., and C. A. Hooker. 1999. “Complexly Organised Dynamical Systems.” Open Systems & Information Dynamics 6 (3): 241–302.
Roden, David. 2012. “The Disconnection Thesis.” In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, Jim Moor, and Eric Steinhart. Springer Frontiers Collection.
I’ve just been listening to Ray Brassier’s presentation on Nick Land’s work at the recent Accelerationism conference at Goldsmiths, University of London with an appropriately night-black, supercharging Lavazza in hand. Here, Ray patiently anatomizes tensions within Land’s ‘thanatropic’ politics, which advocates intensifying the deracinating power of Capital to generate pure, unbound intensities beyond the scope of human phenomenology or representation. This is anti-personnel, overkill leftism with a grisly Terminator affix on its multi-segmented carapace. More intense than a sub-dermal Lavazza, then, and, in the wake of 1980s/90s cyberpunk masterpieces like William Gibson’s Neuromancer and Bruce Sterling’s Schismatrix, it made it seem as if philosophy could be compiled in an altogether new machine code.
Problem: death – as pure uncanceled intensity – is not only beyond any phenomenology (human or otherwise) but, arguably, there is no such thing. There are processes which – in Manuel DeLanda’s terms – involve the successive cancellation of intensive differences (energetic or chemical gradients, say) and these are important drivers in the morphogenesis of ‘things’: cellular boundaries, the partition of self-organizing maps, etc. But such processes are quantifiable empirical particulars, not the shark-like denizens of the noumenal depths we might imagine if we took our cue from Freud’s steampunk ontology. Perhaps what has gone wrong here is what goes wrong with humanism when it is read exclusively in transcendental terms. If there is some necessary limiting structure to human experience (linear temporal order, embodiment, intentionality, whatever) then it becomes conceptually possible to speculate about its theological, intensive or posthuman excision. If we adopt a flat ontological approach which abjures such schematizing structures, then the excision of the human cannot be understood through conceptual analysis at all since there is no a priori anthropology to subvert or negate. As I argued in a recent paper in the Journal of Evolution and Technology: “we cannot exclude, a priori, the possibility of a posthuman alterity. We can only preclude an a priori conception of what that possibility entails.” The posthuman point of excision from Capitalism is not a semantic void or empty signifier or a pure noumenon, but unrepresentable only in advance of its empirical actualization. Understanding the excision of the human (or Capital) is thus a matter of productive engagement with the world, but there is no reason – beyond a misplaced obsession with post-Kantian dualisms or steampunk – to eschew the guidance of theory in bringing this about.
The term ‘flat ontology’ was coined by Manuel DeLanda in his book Intensive Science and Virtual Philosophy. Flat ontologies are opposed there to hierarchical ontologies in which the structure and evolution of reality is explained by transcendent organizing principles such as essences, organizing categories or natural states:
[While] an ontology based on relations between general types and particular instances is hierarchical, each level representing a different ontological category (organism, species, genera), an approach in terms of interacting parts and emergent wholes leads to a flat ontology, one made exclusively of unique, singular individuals, differing in spatio-temporal scale but not in ontological status (DeLanda 2004, p. 58).
In a flat ontology the organization of entities is explained with reference to interactions between particular, historically locatable entities. It is never the result of entities of one ontological kind being related to an utterly different order of being like a God, a transcendental subject, a natural state or its associated species essences (Sober 1980). For flat ontologies, the factors which motivate macro-level change are always emergent from and ‘immanent’ to the systems in which the change occurs.
DeLanda’s characterization of flat ontology comes during a discussion of the ontological status of species, in which he sides with philosophers of biology like David Hull and Elliott Sober who hold that species are differentiated populations that emerge from variations among organisms and the evolutionary feedback processes these drive (DeLanda 2004, 60). For DeLanda, evolutionary feedback instantiates a universal tendency for identifiable things and their properties to emerge from intensive (or productive) differences such as variations in heritable adaptive traits or chemical concentrations (Ibid., 58-9; 70). Thus the formation of soap bubbles depends on the tendency of component molecules to assume a lower state of free energy, minimizing inter-molecular distances and cancelling the forces exerted on individual molecules by their neighbors (Ibid., 15). The process instantiates an abstract tendency for near-equilibrium systems with free energy to ‘roll down’ to a macrostate attractor. Thus for DeLanda’s ontology (following Deleuze), individuals are not products of the operations of a Kantian/Husserlian transcendental subject but of the cancellation of intensive differences and the generative processes they drive. These processes are governed by mathematical structures – e.g. ‘virtual’ attractors or ‘singularities’ – which are ‘quasi-causal’ influences on their trajectory through a particular state space (Ibid., 14).
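The ‘rolling down’ image can be given a minimal numerical sketch. The following toy example is my own, not DeLanda’s: a system whose state descends an energy gradient reaches the same macrostate attractor from any nearby starting point, which is the sense in which the attractor ‘governs’ the trajectory without acting as an extra cause.

```python
# Toy gradient system (my own illustration): the state x descends the
# potential V(x) = (x - 2)^2 and settles at its minimum, x = 2, which
# plays the role of a macrostate attractor.
def descend(x, rate=0.1, steps=200):
    for _ in range(steps):
        x -= rate * 2 * (x - 2)   # discrete step along -dV/dx
    return x
```

Different initial conditions (x = 10, x = -5) converge on the same attractor, just as the molecules of a soap film reach the same minimal free-energy configuration from many starting arrangements.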
How do we reconcile this second ontological claim (which I will refer to as ‘transcendental materialism’) with an adherence to a flat ontology of individuals? Is ontological flatness merely a regional principle applying to the ‘bits’ of the universe where differentiated particulars have already emerged from intensive processes, rendering their generative mechanisms irrelevant to understanding or categorizing the entities they have become? Moreover, if these processes are explained in terms of the virtual structures they exhibit, such as their singularities, doesn’t TM just reintroduce an ontological hierarchy between particular and universal?*
Graham Harman argues that the quasi-causal role of the abstract or virtual in DeLanda’s thought vitiates its commitment to a flat ontology for which “atoms have no more reality than grain markets or sports franchises” (Harman 2008, 370). Thus while depriving species and kinds of any distinctive organizing role, DeLanda inflates the role of the ‘genus’ in the form of virtual patterns (such as the relationship between the topology of systems and their capacity for autocatalysis explored by Stuart Kauffman and others). Secondly, subordinating individuals to their historical generative processes is seen by Harman as a way of ‘undermining’ the status of the particular or individual, which – against the letter of flat ontology – is somehow less real or effective than the intensive processes that produce it.
I think Harman does contemporary philosophers a favour by anatomizing these tensions within DeLanda’s materialism. However, it is far from clear to me that the regulative ideal of ontological flatness necessitates an ontology in which deep individuals and their (largely non-manifest) capacities play the central organizing role. It may be that the generative histories of particulars are relevant only insofar as they leave “lasting fingerprints” on the particulars they generate, making DeLanda’s proposal that we categorize particulars by way of the generative processes that produce them potentially problematic in some cases (Ibid., 374; DeLanda 2004, 50). However, if DeLanda’s (and Deleuze’s) transcendental materialism is correct, then any entity generated as a result of these processes will always be – as Iain Grant emphasizes – a fragile achievement, fatally involved in the play of further intensities (for example, at certain temperature thresholds, the lipid layers dividing biological cells from their watery milieu will simply melt, and their ‘cohesion’ as individuals breaks down). The question of typing by generative process is thus an empirical matter of the causal relevance of such processes to the maintenance of individuals at all scales.
There is no reason why flat ontologies have to be individualist or object-oriented. The concept of the ‘individual’ and the wider category of the ‘particular’ are often conflated. The latter category may contain events, ‘diffusions’ or collectives: each of which may be insufficiently differentiated to qualify for objecthood (Roden 2004, p. 204). The cancellation of intensive quantities can certainly be accommodated within the category of particular events without threatening flatness (whether this is an orthodox Deleuzean solution doesn’t concern me). Secondly, insofar as the virtual laws of form which DeLanda describes reflect the mathematical structure of morphogenetic processes or systems, then their ontological autonomy need not violate the autonomy of the particular. Rather, morphogenetic structures reflect substrate neutral or formal constraints on the behavior of material systems whose effects are entirely produced by those systems. Quasi-causes do not preempt causes proper but reflect structural similarities between systems with otherwise distinct components.
For example, Stuart Kauffman has used computer simulations of so-called ‘NK Boolean networks’ to argue that the capacity of systems of mutually interacting parts to generate stable auto-catalytic cycles is sensitive to the number of inter-connections between those parts. If the number of connections is large (that is, if the number of connections K to a given component approximates the number of components N), the system behaves in a random, disordered way. However, for smaller values of K (e.g. K=2) the system settles down to exploring a relatively small number of ‘attractor’ sequences. Kauffman speculates that this relationship is substrate-neutral – independent of the nature of the system components (they could be nodes in an NK Boolean simulation or chemical substances in a solution).
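The experiment is easy to reproduce in outline. The following is a minimal sketch of an NK Boolean network simulation of my own devising, not Kauffman's original code: each node is wired to K random inputs and updated by a random truth table, and we measure how long the attractor cycle is once the trajectory starts repeating.

```python
# Minimal NK Boolean network sketch (my own illustration of the idea).
import random

def random_network(n, k, rng):
    """Each node gets k input nodes and a random Boolean truth table."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: each node reads its k inputs as a binary index."""
    new_state = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for i in ins:
            idx = (idx << 1) | state[i]
        new_state.append(table[idx])
    return tuple(new_state)

def attractor_length(n, k, rng, max_steps=5000):
    """Iterate from a random state until a state repeats; the gap between
    the two visits is the length of the attractor cycle."""
    inputs, tables = random_network(n, k, rng)
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = {state: 0}
    for t in range(1, max_steps):
        state = step(state, inputs, tables)
        if state in seen:
            return t - seen[state]
        seen[state] = t
    return max_steps

rng = random.Random(0)
n, trials = 12, 30
ordered = sum(attractor_length(n, 2, rng) for _ in range(trials)) / trials
chaotic = sum(attractor_length(n, n, rng) for _ in range(trials)) / trials
```

With K = 2 the networks settle into short attractor cycles; with K = N the average cycle length is far larger, approaching the square root of the size of the state space, which is the ordered/chaotic contrast Kauffman describes.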
So a provisional conclusion, here, is that we can retain the role of structural ‘quasi-causes’ and reject the primacy of individuals without compromising the regulative ideal of ontological flatness.
DeLanda, Manuel. (2004), Intensive Science and Virtual Philosophy. London: Continuum.
___(2006), A New Philosophy of Society. London: Continuum.
Harman, Graham (2008), ‘DeLanda’s Ontology: Assemblage and Realism’, Continental Philosophy Review 41, 367–383.
Roden, David. (2004), ‘Radical Quotation and Real Repetition’, Ratio: An international journal of analytic philosophy, XVII/2 (2004), pp. 191–206.
Sober, Elliott (1980), ‘Evolution, Population Thinking and Essentialism’, Philosophy of Science 47(3), pp. 350–383.
*We could also ask: is the cancellation of intensive difference merely a regional principle applying to various kinds of thermodynamic systems rather than, say, to more fundamental physical entities or structures?