Autonomy and Modularity

On February 14, 2013, in Uncategorized, by enemyin1


Autonomous systems of the kind that we can conceive as emerging from our technology are liable to be modular assemblages of elements that can couple opportunistically with other entities or systems, creating new assemblages whose powers and dispositions are transformed and dynamically put into play by such couplings.

The best way of representing modularity is in terms of networks consisting of nodes and their interconnections. A network is modular if it contains “highly interconnected clusters of nodes that are sparsely connected to nodes in other clusters” (Clune, Mouret and Lipson 2012, 1). In autonomous assemblages, modules support functional processes that make a distinct and specialized contribution to maintaining the conditions necessary for other interdependent processes within the assemblage.

Modules may or may not be spatially localized entities. They may be relatively fragmented while exhibiting dynamical cohesion. An instance of a software object class such as an “array” (an indexed list of objects of a single type) need not be instantiated on contiguous regions of a computer’s physical memory. It does not matter where the data representing the array’s contents is physically located so long as the more complex program which it composes can locate that data when it needs it. Thus while it is possible that all assemblages must have some spatially bounded parts – organelles in eukaryotic cells and distributors in internal combustion engines come in spatially bounded packages, for example – not all functionally discrete parts of assemblages need be spatially discrete in the way that organelles are. Cultural entities such as technologies or symbols may consist of repeatable or iterable patterns rather than things, and may be conceived as repeatable particular events rather than objects (Roden 2004). Yet in systems – such as socio-technical networks – whose components are cued to recognize and respond to patterns, such entities can exert real causal influence by being repeated into varying contexts.
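
A minimal sketch of this point – my own illustration, not drawn from the cited literature, and assuming the CPython interpreter, in which id() happens to report an object’s memory address:

```python
# The list functions as a single "array-like" module even though the data
# for its elements lives at scattered addresses (a CPython-specific detail:
# id() returns a memory address in this implementation).
items = ["alpha", "beta", "gamma", "delta"]

for i, item in enumerate(items):
    # The addresses are typically scattered, not consecutive...
    print(f"index {i}: value {item!r} stored at address {id(item)}")

# ...yet indexed access works regardless of physical layout.
assert items[2] == "gamma"
```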

Importantly for our purposes, dynamical cohesion should not be conflated with functional stability. An entity can retain its dynamical integrity and intrinsic powers while subtending distinct wide functional roles in the systems to which it belongs. To use Don Ihde’s term, such entities are functionally “multistable”. An Acheulian hand axe – a technology used by humans for over a million years – might have been used as a scraper, a chopper or a projectile weapon.[1] Modern technologies such as mobile phones and computers are, of course, designed to be multistable, though their uses can exceed the specifications of their designers, as when a phone is used as a bomb detonator (Ihde 2012). It seems as if the decomposability of cognitive systems also confers multistability upon their parts, thus contributing to the functional autonomy of the system as a whole.

In cognitive science, the classical modularity thesis held that human and animal minds contain encapsulated, fast and dirty, automatic (mandatory) domain-specific cognitive systems dedicated to specialized tasks such as kinship-evaluation, sentence-parsing or classifying life forms. However, it is an empirical question whether the mind is wholly or partly composed of domain-specific cognitive agents and, as Keith Frankish notes, a further empirical question whether neural modularity also holds: that is, whether domain-specific cognitive functions map onto anatomically discrete regions of the human brain such as Broca’s area (traditionally associated with language processing) or the so-called “Fusiform Face Area” (Frankish 2012, 280). Neither the classical theory of mental modules nor the neural modularity thesis follows from the fact that human brains are decomposable in the network sense presupposed by assemblage theory.

We should nonetheless expect autonomous entities such as present organisms or hypothetical posthumans to be network-decomposable assemblages rather than systems in which every part is equally coupled with every other part, because modularity confers flexibility on known kinds of adaptive system.[2] For example, in biological populations modularity is recognized as one of the necessary conditions of evolvability: “an organism’s capacity to generate heritable phenotypic variation” (Kirschner and Gerhart 1998, 8420). Some biologists argue that the transition from prokaryotic cells (whose DNA is not contained in a nucleus) to more complex eukaryotic cells (which have nucleated DNA as well as more specialized subsystems such as organelles) was accompanied by a decoupling of the processes of RNA transcription and subsequent translation into proteins. This may have allowed noncoding (intronic) RNA to assume regulatory roles necessary for producing more complex organisms, because the separation of sites allows the intronic RNA to be spliced out of the messenger RNA where it might otherwise disrupt the production of proteins. If, as seems to be the case, regulatory portions of intronic DNA and RNA are necessary for the production of higher organisms, then this articulation in DNA expression may have allowed the ancestor populations of complex multi-cellular organisms to explore gene-regulation possibilities without disabling protein expression (Ruiz-Mirazo and Moreno 2012, 39; Mattick 2004).

The benefits of articulation apply at higher levels of organization in living beings for reasons that may hold for autonomous “proto-ex-artefacts” poised for disconnection. Nervous systems need to be “dynamically decoupled” from the environment that they map and represent because perception, learning and memory rely on establishing specialized information channels and long-term synaptic connections in the face of changing environmental stimulation. This entails a capacity “for cells to step back from the manifold of ambient stimulus and to be prepared to pick and choose which stimulus to make salient and thus in so doing a capacity to enjoy an unprecedented level of internal autonomy” (Moss 2006, 932–934; Ruiz-Mirazo and Moreno 2012, 44).[3]

Network decomposition of internal components also seems to carry advantages within control systems, including those that might actuate posthumans one day. Research into locomotion in insects and arthropods shows that, far from using a central control system to co-ordinate all the legs in a body, each leg tends to have its own pattern generator.

A coherent motion capable of supporting the body emerges from the excitatory and inhibitory actions of the distributed system rather than through co-ordination by a central controller. The evolutionary rationale for distributed control of locomotion can be painted in similar terms to that of the articulation of DNA transcription and expression considered above – a distributed system being far less fragile in the face of evolutionary tinkering than a central control architecture in which the function of each part is heavily dependent on those of other parts.
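
A toy illustration of such distributed control – my own sketch, not a model taken from the locomotion research itself – in which six “leg” oscillators coordinate purely through local excitatory/inhibitory coupling, with no central controller:

```python
import numpy as np

# Each leg has its own phase generator; anti-phase coupling to its ring
# neighbours pushes adjacent legs apart in phase. Coordination emerges
# from these local interactions alone.
n_legs, dt, steps = 6, 0.01, 5000
omega = 2 * np.pi * 1.0      # intrinsic stepping frequency (1 Hz)
k = 2.0                      # coupling strength
rng = np.random.default_rng(1)
phase = rng.uniform(0, 2 * np.pi, n_legs)   # random initial phases

for _ in range(steps):
    coupling = np.zeros(n_legs)
    for i in range(n_legs):
        for j in (i - 1, (i + 1) % n_legs):    # neighbouring legs only
            coupling[i] += k * np.sin(phase[j] - phase[i] + np.pi)
    phase = (phase + dt * (omega + coupling)) % (2 * np.pi)

# The legs lock into a stable pattern of fixed relative phases - a
# coordinated rhythm with no unit supervising the whole.
print(np.round(np.degrees((phase - phase[0]) % (2 * np.pi))))
```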

This rationale plausibly applies to human beings as well as to our immediate primate ancestors, especially in the case of sophisticated cognitive feats that require the organism to learn specific cultural patterns – such as languages – which would not have been stable or invariant enough to have selected for the component abilities that they require over evolutionary time (Deacon 1997, 322-334; the Visual Word Form Area is a particularly spectacular example of such “cultural recycling” – see below). While this is compatible with network decomposition, it may not tally with the classical modularity thesis since it suggests an evolutionary rationale for the promiscuous re-use of functionally multistable components.

Evidence from functional imaging suggests that anatomically discrete regions like Broca’s or the Fusiform Area are co-opted by evolutionary and cultural processes in support of functionally disparate cognitive tasks. For example, relatively ancient areas in the human brain known to be involved in motor control are also involved in language understanding. This suggests that circuits associated with grasping the affordances and potentialities of objects were recruited over evolutionary time to meet the emerging cultural demands of symbolic communication (Anderson 2007, 14). In a recent target article on neural reuse in Behavioral and Brain Sciences, Michael Anderson cites research suggesting that older brain areas tend to be less domain-specific and more multistable – that is, they tend to get re-deployed in a wider variety of cognitive domains (Anderson 2010, 247). Peter Carruthers and Keith Frankish likewise argue that circuits in the visual and motor areas which were initially involved in controlling and anticipating actions have become co-opted in the production and monitoring of propositional thinking (beliefs, desires, intentions, etc.) through the production of inner speech. An explicit belief, for example, can be implemented as a globally available action-representation – an offline “rehearsal” of a verbal utterance – to which distinctive commitments to further action or inference can be undertaken (Carruthers 2008). Andy Clark cites experimental work on Pan troglodytes chimpanzees which comports with Carruthers and Frankish’s assumption that cognitive systems adapted for pattern recognition and motor control can be opportunistically reused to bootstrap an organism’s cognitive abilities. Here, an experimental group of chimps were trained to associate two different plastic tokens with pairs of identical and pairs of different objects respectively. The experimental group were later able to solve a difficult second-order difference categorization task that defeated the control group of chimps who had not been trained to use the tokens:

The more abstract problem (which even we sometimes find initially difficult!) is to categorize pairs-of-pairs of objects in terms of higher order sameness or difference. Thus the appropriate judgement for pair-of-pairs “shoe/shoe and banana/shoe” is “different” because the relations exhibited within each pair are different. In shoe/shoe the (lower order) relation is “sameness”; in banana/shoe it is difference. Hence the higher-order relation – the relation between the relations – is difference (Clark 2003, 70).

Interestingly, Clark notes that the chimps in the experimental group were able to solve the problem without repeatedly using the physical tokens, suggesting that they were able to associate “difference” and “sameness” with inner surrogates similar to the offline speech events posited by Carruthers and Frankish (71; see also Wheeler 2004).
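
The logic of the token trick can be rendered schematically – this is my own illustration, not the experimental protocol – to show how re-representation collapses the second-order problem into a first-order one:

```python
# Once each pair is replaced by a "same"/"different" token, judging
# pairs-of-pairs just reapplies the first-order comparison to the tokens.
def first_order(a, b):
    """The trained ability: label a pair with a sameness/difference token."""
    return "SAME" if a == b else "DIFFERENT"

def second_order(pair1, pair2):
    """Compare the tokens, not the objects: a first-order judgement again."""
    return first_order(first_order(*pair1), first_order(*pair2))

# "shoe/shoe and banana/shoe": the within-pair relations differ, so the
# relation between the relations is difference.
print(second_order(("shoe", "shoe"), ("banana", "shoe")))    # DIFFERENT
print(second_order(("shoe", "shoe"), ("banana", "banana")))  # SAME
```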

This account of the emergence of specialized symbolic and linguistic thinking via the reuse of neural circuits evolved for pattern recognition and motor control illustrates a more general ontological schema. Assemblages – whether human, inhuman, animate or inanimate – inherit the capacity to couple with larger assemblages from their structure and components and are similarly constrained by those powers. Carbon atoms have the power to assemble complex molecular chains because their four valence electrons permit the formation of multiple chemical bonds. Simpler prokaryotic cells may lack the capacity to evolve the regulatory networks required to form multicellular affiliations because their encoding process is insufficiently differentiated. Likewise, although specific neural circuits may be inherently multistable, it does not follow that each can do anything. Each may have specific “biases” or computational powers that reflect its evolutionary origins (Anderson 2010, 247). For example, Stanislas Dehaene and Laurent Cohen review some remarkable results suggesting the existence of a Visual Word Form Area, a culturally universal cortical map situated in the fusiform gyrus of the temporal lobe, which is involved in the recognition of discrete and complex written characters independently of writing system.

As Dehaene and Cohen observe, it is not plausible to suppose that the VWFA evolved specifically to meet the demands of literate cultures, since writing was invented only 5400 years ago and only a fraction of humans have been able to read for most of this period (Dehaene and Cohen 2007, 384). Thus it appears that the cortical maps in the VWFA have structural properties which make them ideal for reuse in script recognition despite not having evolved for the representation of written characters (among the factors suggested is that the VWFA is located in a part of the fusiform area receptive to fine-grained visual input from the fovea – 389).

Coupling an assemblage with another system – e.g. a transcultural code such as a writing or number system – may, of course, increase the functional autonomy of a system by allowing it to respond fluidly and adaptively to the demands of its environment – enlisting new affiliations and resources which then come to be functional for it. Literacy and numeracy have become functionally necessary for economic activity in advanced industrial societies – clearly this was not always so! However, this is only possible because both the assemblage and its parts are open to functional shifts that, in effect, allow the creation of new social “megamachines” which extend beyond the coupled individuals. Thus while complex assemblages articulated into many functionally open systems may be more functionally autonomous than less articulated ones – more capable of accruing new functions – they are also more apt to be “deterritorialized” by happening on new modes of existence and new ways of being affected (DeLanda 2006, 50-51).

References:

Anderson, Michael (2007). “Massive redeployment, exaptation, and the functional integration of cognitive operations”. Synthese, 159(3): 329-345.

Anderson, M. L. (2010). “Neural reuse: A fundamental organizational principle of the brain”. Behavioral and Brain Sciences, 33(4), 245-266.

Carruthers, Peter (2008). “An architecture for dual reasoning”. In J. Evans & K. Frankish (eds.), In Two Minds: Dual Processes and Beyond. Oxford University Press.

Clark, Andy (2003). Natural Born Cyborgs. Oxford: Oxford University Press.

Clune, J., Mouret, J. B., & Lipson, H. (2012). “The evolutionary origins of modularity”. arXiv preprint arXiv:1207.2743.

Deacon, Terrence (1997). The Symbolic Species: The Co-evolution of Language and the Human Brain. London: Penguin.

Dehaene, S., & Cohen, L. (2007). “Cultural recycling of cortical maps”. Neuron, 56(2), 384-398.

DeLanda, M. (2006), A New Philosophy of Society: Assemblage Theory and Social Complexity, London: Continuum.

Frankish, Keith (2012). “Cognitive Capacities, Mental Modules, and Neural Regions”. Philosophy, Psychiatry, and Psychology 18 (4).

Ihde, D. (2012). “Can Continental Philosophy Deal with the New Technologies?” Journal Of Speculative Philosophy, 26(2), 321-332.

Kirschner, Marc and Gerhart, John (1998). “Evolvability”. Proceedings of the National Academy of Sciences USA, 95, 8420-8427.

Moss, L. (2006). “Redundancy, plasticity, and detachment: The implications of comparative genomics for evolutionary thinking”. Philosophy of Science, 73, 930–946.

Roden, David (2004). ‘Radical Quotation and Real Repetition’, Ratio (new series) XVII 2 June 2004, 191-206.

Ruiz-Mirazo, Kepa & Moreno, Alvaro (2012). “Autonomy in evolution: from minimal to complex life”. Synthese 185 (1):21-52.

Wheeler, M. (2004). “Is language the ultimate artefact?.” Language Sciences, 26(6), 693-715.

 


[1] See Don Ihde, “Embodiment and Multistability”, http://vimeo.com/49101825, accessed 14/02/2013.

[2] One of the benefits of so-called “object oriented” (OO) programming languages like Java over “procedural” programming languages such as COBOL is that OO programs organize software objects in encapsulated modules. When a client object in the program has to access an object (e.g. a data structure such as a list) it sends a message that activates one of the object’s “public” methods (e.g. the client might “tell” the object to return an element stored in it, add a new element or carry out an operation on existing elements). However, the client’s message does not specify how the operation is to be performed. This is specified in the code for the object. From the perspective of the client, the object is a black box that can be activated by public messages yielding a consumable output. This means that changes in how the proprietary methods of the object are implemented do not force developers to change the code in other parts of the program, since these details do not “matter” to the other objects. Maintenance and development of software systems becomes simpler.
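
The point can be sketched in a few lines of code (in Python rather than Java, for brevity; a minimal sketch rather than a production design):

```python
# Clients call public methods and never see how the data is stored, so the
# internal representation can be rewritten without touching client code.
class Stack:
    def __init__(self):
        self._items = []          # private detail: could equally be a linked
                                  # list, an array, or a file - clients never know

    def push(self, element):      # public method: "tell" the object to store
        self._items.append(element)

    def pop(self):                # public method: yield a consumable output
        return self._items.pop()

# Client code depends only on the public interface:
s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2 - unchanged even if Stack's internals are reimplemented
```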

[3] The cochlear cells in our inner ear are connected to hair-like cells which are receptive to sound vibrations. This specialized arrangement allows the cochlea to conduct a fast spectrum analysis on incoming vibrations, assaying the relative amplitudes of components in complex sounds.

Posthuman Ecology

On May 7, 2012, in Uncategorized, by enemyin1

Work is being produced at the interface of the biosciences, plastic and visual art, architecture and philosophy that reimagines ecology as a technique of relational technogenesis rather than as the search for a pre-lapsarian presence or balance. Practitioners of this speculative engineering frequently employ Deleuze and Guattari’s notion of the assemblage and Haraway’s figure of the cyborg – “creatures simultaneously animal and machine who populate worlds ambiguously natural and crafted” – as ways of thinking about emergent but decomposable wholes whose parts are not defined by their origin in proprietary regions such as “nature” or “culture”, “the human” or “the nonhuman”.
Rachel Armstrong – protocell engineer, architect and theorist – is one of the leading thinkers and makers in this interdisciplinary field. Her article “The Ecological Human” on the NextNature site offers an appropriately roomy synthesis of emergentist metaphysics, singularitarianism, and slime aesthetics. Armstrong also links to a preview of this evidently great work of Dada-Cyber-Erotica by filmmaker Hans Scheirl. Enjoy.


Reality Chunking

On April 15, 2012, in Uncategorized, by enemyin1

A pre-publication draft of my review of Manuel Delanda’s Philosophy and Simulation: The Emergence of Synthetic Reason

PhilSim_Review_F_WEB.


According to Manuel Delanda an assemblage such as an organism or an economic system is an emergent but decomposable whole. Unlike a totality, an assemblage’s parts can follow independent careers:

Pulling out a live animal’s heart will surely kill it but the heart itself can be implanted into another animal and resume its regular function. (Delanda 2011, 184).

Nonetheless, the emergent properties of a given assemblage W depend ‘on the actual exercise of the capacities of its parts’ (p1, p2… pn).

If this dependency is construed as supervenience then Delanda seems to confront the ‘causal exclusion problem’ anatomized by Kim: the threatened preemption of emergent properties by their basal conditions. Suppose facts about W’s emergent properties supervene on a basal fact P about p1, p2… pn, and that a given emergent property M of W at time t is causally sufficient to bring about another emergent property M* by bringing about a basal condition P* (some state of p1, p2… pn belonging to the supervenience base of M*). Given the upwards dependence constituted by the supervenience relation (the fact that having P suffices for having M but not vice versa), it seems counter-intuitive to deny that the basal condition for M, P (the aforementioned exercise of the micro-capacities), could cause P* on its own. So responsibility for inter-level causation between emergent properties M, M* can be entirely devolved onto their basal conditions, ‘making the emergent property M otiose and dispensable as a cause of P*’ (Kim 2006: 558).

The causal exclusion argument clearly threatens the flat ontological assumption that assemblages have causal autonomy.

There are strategies by which one might hope to de-fang the causal exclusion argument. As Andreas Hüttemann points out, supervenience may run symmetrically – from higher-scale properties to lower as well as from lower to higher (Hüttemann 2004: 71). No change in emergent properties without changes in a given class of basal properties (upwards supervenience) is compatible with no changes in basal properties without changes in a given class of emergent properties (downwards supervenience). If asymmetric supervenience is the assumption motivating causal exclusion then symmetric supervenience undermines a key premise in the argument.[i]

However, it is not clear that Delanda would want to commit himself generally to symmetrical supervenience, if only because he claims that emergent properties can be stable against significant perturbations at the micro-level. Science, he claims, is only possible on the condition that we can ‘chunk’ stabilities at a given level without modeling all the way down (2011: 14). Folk psychology does not require folk neuroscience; knowledge of classical genetics does not require molecular genetics.

A more congenial avoidance strategy could be furnished by an account of how wholes exercise ‘top down’ influence on the manifestation of capacities. While Delanda has not, to my knowledge, discussed supervenience, he is explicitly committed to the existence of top-down as well as bottom-up causality – a position he explicates in terms of the distinction between properties and capacities (see, for example, Delanda 2010b, 68-70). The properties of a thing are necessarily actualized but the actualization of capacities is context-sensitive (Delanda 2011: 4). Delanda regards the actualization of capacities not as a ‘state’ but as an event or interaction, since it includes the affecting of one thing and the being-affected of another (Ibid.; see also Delanda 2010a: 385).

For example, Chapter Eleven of Philosophy and Simulation considers the problem space for the emergence of archaic states from simpler chiefdoms in which wealth and status differences disseminated more fluidly. One explanation for the more stratified forms found in complex chiefdoms or proto-states is that the relaxation of incest prohibitions on marrying close relatives would have allowed persistent concentrations of wealth and status – an explanation with prima facie support from multi-agent computer simulation (Delanda 2011: 172). So while a given accretion of agricultural wealth, say, has capacities for distributions between lineages or within lineages, there are critical parameters determining which is actualized.

Chapter Seven, ‘Neural Nets and Mammalian Memory’, considers the conditions for the emergence of the capacity for episodic memory on the basis of simpler networks that lacked the capacity to represent processes or histories. The capacity of nervous systems to synthesize the manifold of successive waves of stimuli can be modelled using so-called ‘recurrent neural networks’ which add feedback from their hidden layers to the input to be received on the next round of stimulation. The result is that the net can be trained to recognize temporal patterns such as bouncing balls or chord sequences (Ibid. 103-4). Thus any given layer of neurons has the capacity to represent temporal regularities, but can manifest it only if ‘plugged’ into a network with the right topology.
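
The architecture is easy to exhibit in miniature. The following sketch – my own, with fixed, untrained weights, not anything from Delanda’s discussion – shows how feeding the hidden layer’s previous activation back in alongside the current input makes the network’s state depend on history:

```python
import numpy as np

# An Elman-style recurrent step: current input plus fed-back hidden state.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W_in = rng.normal(size=(n_hid, n_in))    # input -> hidden weights
W_rec = rng.normal(size=(n_hid, n_hid))  # hidden -> hidden feedback weights

def run(sequence):
    h = np.zeros(n_hid)                  # context starts empty
    for x in sequence:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

# The final input is identical (a), but the preceding history differs,
# so the hidden states differ: the net encodes a rudimentary history.
print(run([a, b, a]))
print(run([b, a, a]))
```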

So parameterized constraints (like incest prohibitions) or structural properties (like network topology) can, it seems, downwardly activate manifestations of component-capacities, explaining the dependence of component behaviour on the assemblages to which they belong. Is this enough to grant ontological autonomy?

Well, this is surely a debatable point. ‘Downward influence’ of this kind is exhibited in very simple cellular automata like John Conway’s Game of Life as much as in real systems (Delanda 2011: Chapter Two). The Game of Life is a two dimensional array of cells, each of which can be ‘Alive’ (On) or ‘Dead’ (Off) at a given time step. The states of the cells are determined by three simple rules:

1) A dead cell with exactly three live neighbors becomes alive on the next time step.

2) A live cell with two or three live neighbors stays alive.

3) In all other cases a cell dies or remains dead.

These rules pass for fundamental physics in the Life World. Yet computer simulations show that patterns exhibiting complex, often unpredictable dynamical regularities can ‘emerge’ from them, though all are realized in arrangements determined by the three rules (Bedau 1997). In all cases these involve higher-level structures constraining the capacities of individual cells, yet it is not clear that such structures should be treated as ontologically distinct from mere aggregates. Nor is it obvious that true ontological novelty can occur in a world where every thing is an aggregation of recurrent constituents determined by invariant rules.
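
The three rules fit in a few lines of code. In the sketch below (my illustration; a toroidal grid is assumed for simplicity) a ‘glider’ – a coherent five-cell pattern – propagates across the grid even though the rules mention nothing but single cells and their neighbours:

```python
import numpy as np

def step(grid):
    # Count live neighbours by summing the eight shifted copies of the grid.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Rule 1 plus survival-with-three: any cell with exactly 3 live
    # neighbours is alive next step; Rule 2: a live cell with 2 survives.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

grid = np.zeros((10, 10), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:   # the glider
    grid[y, x] = 1

for _ in range(8):    # every 4 steps the glider shifts one cell down-right
    grid = step(grid)

print(grid)           # the same five-cell pattern, displaced: a higher-level
                      # 'object' nowhere mentioned in the three rules
```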

 

References

Delanda, M. (2004), Intensive Science and Virtual Philosophy. London: Continuum.

Delanda, M. (2006), A New Philosophy of Society: Assemblage Theory and Social Complexity. London: Continuum.

Delanda, M. (2010), ‘Emergence, Causality and Realism’, in Levi Bryant, Nick Srnicek and Graham Harman (eds), The Speculative Turn: Continental Materialism and Realism (Melbourne: Re.press).

Delanda, Manuel (2011), Philosophy and Simulation: The Emergence of Synthetic Reason, London: Continuum, 226 pp.

Hüttemann, Andreas (2004), What’s Wrong with Microphysicalism? London: Routledge.

Kim, Jaegwon (2006), ‘Emergence: Core Ideas and Issues’, Synthese 151(3), pp. 547-559.

[i] There may, of course, be ways of casting the causal exclusion argument that do not hinge on asymmetric supervenience.


Remarks on Delanda on the Virtual

On July 1, 2011, in Uncategorized, by enemyin1

 

Gilles Deleuze famously replaces the traditional modal distinction between actuality and possibility with a distinction between actuality and virtuality. This is introduced to block the claim that reality can be fully or adequately represented. If the actual is just the instantiation of the possible, then an actual thing resembles the thing as conceived or as represented in every respect other than with regard to existence or actuality (Deleuze 1994, p. 211). Whereas the actual thing qualitatively ‘resembles’ the possible thing (like the subject of a hyper-refined bit-map) the virtual is the part of the thing that corresponds to its tendencies. Tendencies are not (so the story goes) the actualization of possible states of the thing but the processes by which the thing self-differentiates. As Daniel Smith points out, this not only short-circuits representation at a fundamental level, but allows for the production of deep metaphysical newness:

Deleuze will substitute for the possible-real opposition what he calls virtual-actual complementarity: the virtual is constituted through and through by difference (and not identity); and when it is actualised, it therefore differs from itself, such that every process of actualisation is, by its very nature, the production of the new, that is, the production of a new difference (Smith 2007, p. 6).

It’s not clear to me whether this entails global anti-representationalism or whether the position is compatible with the view that the actual (but not the virtual) is representable in some fashion. The latter position seems implicit in those passages where Deleuze equates the actual with the phenomenal (experienceable) and the virtual with those noumenal motors (‘the noumenon closest to the phenomenon’) that bring extensive differences and objects into our phenomenal purview (Ibid., p. 222). Moreover, global anti-representationalism seems to run up against the obvious objection that an entirely unrepresentable world would be intractable and unknowable, whereas our world seems tractable and knowable in part.

Be this as it may, this does seem to entail that we cannot represent a Deleuzean becoming (a virtuality) as the realization of some possible state of a thing, even if (assuming that global anti-representationalism is rejected) we can represent successive actualizations in this way. If this is right, then it raises some interesting questions about the way concepts drawn from the mathematics of dynamical systems have been employed by contemporary Deleuzeans: Manuel Delanda being the most prominent figure here.

Dynamical systems theory (DST) is all about trajectories in mathematical spaces whose points describe the possible states of a system (a state space). To quote Robert Devaney, it asks ‘where do points go and what do they do when they get there’ (Devaney 1986, 17). Where a differential function f′ describing a dynamical system can be solved, it is possible to show how its integral f generates a trajectory with respect to its variables. Where such functions cannot be solved (which, mathematicians inform us, is true in the majority of cases) it may still be possible to give a qualitative account of the tendencies of the system. Thus the differential equations of a system, which describe how its rates of change alter, can tell us about attracting sets (attractors) which its orbits (trajectories through state space) approach asymptotically – that is, orbits tend to approach these sets by successive iterations without ever arriving in them.

Now, Delanda has used this geometrical conception of an attractor or ‘singularity’ to explicate Deleuze’s conception of the virtual and to explain why the virtual/actual distinction is metaphysically preferable to the possible/actual distinction. We can, for sure, represent the possible states of a system by a state space. For example, the state space of a 3 layer neural net with 8 inputs + a 4 neuron hidden layer + a 2 unit output layer can be represented in a space of 8 + 4 + 2 = 14 dimensions. Any possible behaviour of this network can be thought of as a point in this 14 dimensional space. If the net can be ‘trained up’ in some discrimination task – like distinguishing round from jagged shapes – the singularities will be points within those partitions of the 4 neuron subspace towards which patterns evoked by ‘jaggedish’ or ‘roundish’ stimuli on the input layer tend to converge.

So far, we have not had recourse to the virtual/actual distinction to describe the behaviour of this system. To be sure, we’ve talked loosely in terms of tendencies: e.g. as stimuli at the input become increasingly jagged the state of the hidden layer in the trained network should tend to approach the prototype ‘jaggedness’ state. But this is really just another way of describing the system’s dispositions – specifying how it would perform given certain kinds of input. So why is DST supposed to help in understanding the virtual? Delanda thinks that the asymptotic nature of singularities is key here. While singularities can be said to specify the behaviour of a system, they do so in terms of states that the system could never ‘actually’ assume:

A clue to the modal status of these invariants is the fact that, as is well known, trajectories in phase space always approach an attractor asymptotically, that is, they approach it indefinitely close but never reach it. Although the sphere of influence of an attractor, its basin of attraction, is a subset of points of phase space, and therefore a set of possible states, the attractor itself is not a possible state since it can never become actual (Delanda 2010, 149)

Thus a singularity represents the tendencies of a system but not one of its possible states:

In other words, unlike trajectories representing possible histories that may or may not be actualized, attractors can never be actualized since no point of a trajectory can ever reach them. Despite their lack of actuality attractors are nevertheless real since they have definite effects. In particular, they confer on trajectories a strong form of stability, called “asymptotic stability” ... It is in this sense that singularities represent only the long term tendencies of a system but never a possible state. Thus, it seems, that we need a new form of physical modality, distinct from possibility and necessity, to account for this double status of singularities: real in their effects but incapable of ever being actual. This is what the notion of virtuality is supposed to achieve (Ibid., 150).

If this account works, then it appears we can unpack Deleuze’s conception of the virtual without the highly speculative metaphysics used in Smith’s gloss above. But can we do this satisfactorily? My worry here is that while an attractor may not lie on an orbit within the dynamical system itself, it does belong to its state space. Moreover, its status qua singularity depends on those features of the system which determine the possible trajectories of the orbits. For example, if a singularity is a single point attractor s and the orbits are defined by a mapping of a point F(p), then successive iterations F(p), F(F(p)), etc. will approach s as the number of repetitions approaches infinity. So this is a property which can be defined in terms of the actual properties of a set: namely the region or ‘basin of attraction’ within which every iteration is a subset of the set generated by previous iterations. The properties which define the singularity thus seem to be structural. They may be very exotic (as we are told is the case with ‘strange’ or chaotic attractors) but their specification does not seem to require any new logical concepts – certainly, no new modal concepts. Maybe I’m missing something vital – I can’t claim a confident grasp of the mathematics of dynamical systems – so I’ll leave it to those better qualified than myself to correct any misunderstandings here.
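
The structural point can be made concrete with a toy example of my own: a contraction map F(p) = p/2 + 1 whose fixed point s = 2 attracts every orbit. The distance to s halves on each iteration, so the orbit approaches s asymptotically without ever reaching it, and this behaviour is fully specified by the actual properties of the map:

```python
# Iterating F(p) = p/2 + 1: since |F(p) - 2| = |p - 2| / 2, the fixed point
# s = 2 is an attractor. Orbits approach it indefinitely closely but (up to
# floating-point limits) never arrive.
F = lambda p: p / 2 + 1
s = 2.0

p = 0.0
for i in range(1, 11):
    p = F(p)
    print(f"iteration {i:2d}: p = {p:.10f}, distance to s = {abs(s - p):.2e}")
```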

References:

Deleuze, G. (1994), Difference and Repetition, Paul Patton (trans.). London: Athlone Press.

Devaney, Robert L. (1986), An Introduction to Chaotic Dynamical Systems. Menlo Park, Ca.: Benjamin Cummings.

Delanda, Manuel (2010). Deleuze: History and Science, Atropos Press.

Smith, Daniel (2007), ‘The Condition of the New’, Deleuze Studies, Vol 1, pp. 1-21.

 


Excision Ethos

On February 1, 2011, in Uncategorized, by enemyin1

A flat ontology would allow emergent discontinuities between the human and non-human. Here we understand radical differences between humans and non-humans as emergent relations of continuity or discontinuity between populations, or other such particulars, rather than as relations between kinds or abstract universals.[1][2]

The most widely accepted definition of emergence holds that an emergent phenomenon P cannot be predicted from its initial conditions (e.g. the existence and microdynamics of precursor populations) short of running a simulation with relevantly similar properties (Bedau 1997, 378). Thus a genuinely predictive simulation of the emergence of some posthuman entity – such as a prospective AI or AI+ – would have to be apt to generate the same kinds of differences. Any simulation of an emergent phenomenon is, in this sense, an emergent phenomenon, given that it must have structurally similar properties. So there can be no simulation of posthuman emergence short of posthuman emergence itself.

It seems, then, that the epistemological distinction between a singularity and its simulacrum evaporates in perfect Baudrillardian equivalence.

I take it that cyborg or assemblage ontology is also fully compatible with flat theories of difference along these lines. If so, the cyborg ontology which arguably underlies speculative posthumanism (SP) and transhumanism (H+) can be characterized by what I refer to as the double logic of ‘deconstruction’ and ‘excision’.

Cyborg/assemblage ontology deconstructs claims to transcendent and transcendental unity or totality in ways that are now relatively familiar. However, the dynamics of such entities furnishes the basis for an excision – in Deleuzean terms, a line of flight – by which individuals or collectives ‘become other’, diverging in historically actualized ways from parent entities.

Excision is not transcendence in a traditional theological or metaphysical sense. The idea of the posthuman is not the dialectical idea of some entity that transcends a specifiable cognitive boundary.

Kant’s noumenon or the God of negative theology are more intellectually domesticated than this, for we know them, at least, in terms of what they are not. In contrast, we cannot know the relations of the posthuman to the human prior to the historical emergence of the posthuman. Nor, given a flat theory of difference, can we deconstruct its possibility on familiar anti-essentialist grounds. We can only preclude an a priori conception of what that possibility entails.

We cannot preclude an a posteriori account of posthuman difference. Posthumans will presumably understand themselves in their own way. Otherwise their nature will have to be studied empirically by other kinds of being – perhaps by institutions resembling the ‘theological observatories’ dotting the ‘Transcend’ – the computational extremum of a far-future galaxy in Vernor Vinge’s A Fire Upon the Deep!

Thus applying a flat ontological analysis to the posthuman implies that it is the technological excision of the human. The position is consistent, then, with an ‘anthropological humanism’ – insofar as it holds that there are real discontinuities in the world – but not with any ‘transcendental humanism’ (to cite a very useful distinction Derrida makes in the ‘The Ends of Man’).[3]

I am not claiming that the posthuman is some ‘empty’ signifier. There are many recent and contemporary precedents in art, philosophy and literature to prospectively excise the human. Vinge’s original essay on the Technological Singularity sets out an imaginary of recursive ontological violence, without discernible limit. Fictions such as Bruce Sterling’s Schismatrix or J. G. Ballard’s Crash enjoin cyborgian transits whose only justification is their formal expressibility. In plastic and performance art, Stelarc’s speculative couplings with technological assemblages like industrial robots or prosthetic ‘third’ arms provide compelling intimations of our obsolescence. The musical assemblages of Varèse, Cage and Xenakis have metastasized new organs of hearing. In philosophy, likewise, we might mention cultural exemplars of excision in the work of French anti-humanists like Foucault and Deleuze or eliminative materialists such as Paul and Patricia Churchland.[4]

So the term ‘posthuman’ is not semantically void. It is ideationally multivalent, precipitate and unwise – but epistemically null. In order to acquire knowledge of the posthuman we would – according to the logic of simulation – have to make ourselves, or some of our ‘wide’ descendants, posthuman.

This apparent recalcitrance to prediction or precedent raises an awkward issue for those with an intellectual, aesthetic or political interest in the posthuman. Why should we be interested in a transformed condition which cannot yet be identified?

The idea of a technological excision is of intellectual concern because it is of direct practical interest.

The human population is now part of a self-augmenting planetary technical system over which we can have little control. Democratizing technology merely ramps up its unpredictability – there can be no exercise of the General Will without the relinquishment of technical modernity itself. So this ‘second nature’ is an emergent causal power in its own right, not an ideological expression of an alienated social form.

The fact that technological systems are out of control doesn’t mean that they, or anything else, are in control. There need be no finality to the system at all: technical self-augmentation does not imply technical autonomy. So the assumption that we belong to a self-augmenting technical system (SATS) should not be confused with the ‘normative technological determinism’ that we find, say, in the work of Heidegger and Jacques Ellul.

As cyborg assemblages whose actions are extended or amplified by this planetary system, our drives, desires and fixations are also modulated by it. To programme computers I must learn the syntax and semantics of a relevant language, but also internalize the aesthetic standards entailed by some or other approach to software design. I buy junk I can learn to like.

By the same token, a desire for technological excision is an iteration of a disruptive self-remaking, expressed in technically constituted beings or macro-assemblages. If the interest in our posthuman prospects expresses a self-excising drive, however, it implies an ethic of technological self-fashioning distinct from the blandly instrumental ethics of H+. The latter is constituted by public ethical standards. The desire to excise the human, however, cannot be a public ethical standard.

This is not because it is too horrible to be expressed. It has no expression other than a speculative engagement with technique: ontological engineering.

The only reason for the principled unintelligibility of the posthuman is its dated non-existence. Thus if we are engaged in excision we also aim or hope to understand what we are getting ourselves into one day.

References

Bedau, Mark (1997), ‘Weak Emergence’, Philosophical Perspectives, 11, Mind, Causation, and World, pp. 375-399.

Delanda, M. (2004), Intensive Science and Virtual Philosophy. London: Continuum.

Derrida, J. (1986), ‘The Ends of Man’, in Margins of Philosophy, Alan Bass (trans.). Brighton: Harvester Press, 109-136.

Roden, David (2010), ‘Deconstruction and Excision in Philosophical Posthumanism’, Journal of Evolution and Technology 21(1): 27-36.

Notes:

[1] Flat ontologies are opposed to hierarchical ontologies in which the structure and evolution of reality is explained by transcendent or transcendental organizing principles: essences, kinds, organizing categories or natural states, to name a few (Delanda 2004, p. 58). I should qualify this ontology further as a regional rather than a fundamental ontology. It may not be possible to eschew essences or organizing structures tout court. However, while essentialism may be defensible in areas like microphysics or the chemistry of the periodic table, it seems far less persuasive as an ontology of complex systems such as mind/brains or cyborg-assemblages. There may be, for example, basic physical laws which are akin to essences. Elementary particles like electrons may legitimately be claimed to have their charge and rest-mass essentially, and even chemical elements like gold may have their atomic numbers necessarily.

[2] If we make the artificially simple assumption that humans are members of the biological species Homo sapiens, then to be biologically ‘human’ is not to exemplify an eidos consisting of necessary characteristics such as ‘rationality’ and ‘animality’, but to be a part of a larger, more geographically extensive and temporally continuous population (Delanda 2004, p. 57).

[3] Thoroughgoing anti-humanists might, at this point, prefer to erase ‘human’ and speak indexically of a transformed ‘us’.

[4] In Scientific Realism and the Plasticity of Mind Paul Churchland memorably describes a group of future people whose common sense conception of reality is informed by modern physical theory: ‘These …’ he writes ‘do not sit on the beach and listen to the steady roar of the pounding surf. They sit on the beach and listen to the aperiodic atmospheric compression waves produced as the coherent energy of the ocean waves is audibly redistributed in the chaotic turbulence of the shallows’ (SRPM, 29).

Flat Ontology II: a worry about emergence

On January 18, 2011, in Uncategorized, by enemyin1


Summary: if you want to distinguish assemblages from aggregates in a flat ontology you need a metaphysics of emergence. But real emergence may not work unless we deny that the parts of assemblages are separate from the whole. This seems to undermine the point of assemblages where, it is said, the parts are logically exterior to one another and can play elsewhere.

The idea of a flat ontology was taken over by Manuel Delanda from Gilles Deleuze. As Levi Bryant notes in The Speculative Turn, it derives from the Deleuzean thesis of the univocity of being: viz. that Being is always predicated of entities in the same sense (Bryant 2010, 269). A flat ontology is one in which no entity is ontologically more fundamental than anything else. Otherwise put, flat ontologies can be opposed to hierarchical ontologies:

[While] an ontology based on relations between general types and particular instances is hierarchical, each level representing a different ontological category (organism, species, genera), an approach in terms of interacting parts and emergent wholes leads to a flat ontology, one made exclusively of unique, singular individuals, differing in spatio-temporal scale but not in ontological status (DeLanda 2004, p. 58).

In a flat ontology the powers and dispositions of an entity are explained with reference to interactions between the particulars that compose or otherwise relate to it. They are never the result of entities of one kind being pushed around by a privileged being like a god, a transcendental subject, a natural state or its associated species essences (Sober 1980).

However, the behaviour of complex entities like organisms, people or societies must be more than the sum of their micro-interactions if these are to be genuine presences in the world and not mere accountancy tools for tracking the aggregate behaviour of their components. The macro-level properties of complex beings must thus be emergent from, and not merely resultants of, these interactions. Unless a concept of emergence can explain how complexes derive their powers from their parts without being reducible to their aggregate behaviour, it is of little value to a flat ontology. Similarly, as Graham Harman emphasizes in his commentary on Delanda, a flat ontology recognizes no ontological primacy of natural over so-called artificial kinds. Both kinds of kind should be viewed as having equal ontological weight to throw around (Harman 2008, 372).

The most widely accepted definition of emergence states that an emergent phenomenon cannot be predicted from its initial conditions (e.g. existence and microdynamics of precursor populations) short of running a simulation with relevantly similar properties (Bedau 1997, 378). For jobbing scientists, this definition of what is sometimes called ‘weak emergence’ usefully dodges philosophical issues about spooky emergent properties or downward causation. However, while unexceptionable and useful, the concept of weak emergence describes emergent behaviour as a function of our epistemic capacities and seems unable to account for genuine ontological novelty.

In Delanda’s work the requirement that the world contain non-derivative kinds or wholes is expressed as the distinction between ‘assemblages’ and ‘aggregates’. Delanda contrasts assemblages both with aggregates and with the synthetic wholes or ‘totalities’ postulated by idealist philosophers.

In a totality each part is constituted by logically necessary (interior) relations to the other parts. The Kantian object, for example, is constituted by the transcendental conditions of possible experience (space, time, the categories). The Hegelian master is constituted, as master, by his relationship to the servant.

An assemblage, on the other hand, is characterized by ‘relations of exteriority’: any part can be detached to ‘play’ elsewhere, though some parts (vital organs) may have a ‘contingently obligatory’ relation to the assemblage insofar as they or a functional equivalent may be required for its continued existence. A pig’s heart valve may have a contingently obligatory relation to a living pig, but this does not impugn its ontological independence; it does not, for example, prevent it being ‘xenotransplanted’ into the heart of a human recipient (Delanda 2006, 11-12).

The properties of aggregates are resultants of the independent behaviours or properties of their parts. However, for Delanda, the macro-level properties actualized by assemblages depend also on interrelations between their parts and not only upon intrinsic properties and micro-behaviours:

The surface of a pond or lake may not afford a large animal a walking medium, but it does to a small insect which can walk on it because it is not heavy enough to break through the surface tension of the water. (Delanda 2004, 73; Delanda 2006, 11).

Delanda’s contribution to The Speculative Turn – ‘Emergence, Causality and Realism’ – clarifies, somewhat, the relationship between actualized or potential capacities and actualized properties:

Sharpness is an objective property of knives, a property that is always actual: at any given point in time the knife is either sharp or it is not. But the causal capacity of the knife to cut is not necessarily actual if the knife is not currently being used. In fact, the capacity to cut may never be actual if the knife is never used. And when that capacity is actualized it is always as a double event: to cut-to be cut. In other words, when a knife exercises its capacity to cut it is by interacting with a different entity that has the capacity to be cut (Delanda 2010, 285).

Aggregates are wholes whose global behaviour is the resultant of the individual behaviours of their parts and, for this reason, can often be predicted using the techniques of linear mathematics (as in the Fourier analysis of a periodic waveform).
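
For instance (a minimal sketch of the Fourier case, with arbitrary component frequencies of my choosing): a waveform built as the linear sum of two sine components can be decomposed back into exactly those components, since nothing in the aggregate exceeds the sum of its parts:

```python
import numpy as np

# A periodic waveform that is just a linear sum of two sines.
t = np.arange(1024) / 1024.0
wave = 1.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# The Fourier transform recovers each component unchanged.
spectrum = np.abs(np.fft.rfft(wave)) / (len(t) / 2)
peaks = np.nonzero(spectrum > 0.1)[0]
print(peaks)                     # [ 5 12] - the component frequencies
print(spectrum[peaks].round(2))  # [1.  0.5] - the component amplitudes
```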

However, in those systems Delanda calls ‘assemblages’ the higher-level behaviour will often not be deducible from the micro-behaviour of constituents even where, as in simple computer simulations like John Conway’s ‘game of life’, the dynamics of the components can be stated simply and exhaustively (Bedau 1997, 379-386). This is because higher-scale properties of assemblages catalyse and constrain the accessible behaviours of its constituents in ways that can’t be deduced from knowledge of their behaviour in isolation. A neuronal unit within an artificial neural network has the capacity to retain information about its past behaviour, but may only manifest this capacity in a recurrent network whose feedback modulates its input with its time-delayed output. The process enabled by the recurrent network determines the dynamics of the parts, though it also depends on the parts having functional properties relevant to neuronal behaviour. The emergent properties of the recurrent network depend on the relational properties of the whole network which allow the neuron to manifest a capacity for encoding a rudimentary history that it would not have manifested in a feedforward network.

Assemblages, for Delanda, thus exhibit macro-level emergent properties in virtue of their higher-level organization and structure. For example, higher-scale processes can shield components from chaotic influences that would disrupt coherent patterns or the sharing of information. This seems to be one of the mechanisms at work in Rayleigh-Bénard convection – a non-equilibrium system in which layers of fluid develop ordered convection cells when the temperature difference between the bottom and top of the layer reaches a critical value. According to Robert Bishop’s reconstruction of the mechanisms of emergence in Rayleigh-Bénard convection, one of the ordering mechanisms here is a ‘shear flow’ caused by a critical temperature gradient. The shear flow reduces the number of states accessible to an eddy within the fluid by causing ‘adjacent’ eddies to move in a common phase. This is equivalent to a reduction of thermodynamic entropy or an increase in the correlatedness between events within the fluid (Bishop 2008, 239).

Bishop and other theorists of emergence like Jeffrey Goldstein argue that genuine ontological emergence occurs where the identities of components or levels are ‘confounded’ or ‘tangled’. One could object, like the emergence-skeptic Jaegwon Kim, that if the parts of an assemblage are ontologically distinct then its ‘macro-level’ properties would be possessed in virtue of the properties and arrangements of its parts (the supervenience of macro-level properties on micro-level properties – Bishop 2008, 242-3). Arranging each part of an assemblage in a specific relation to its neighbours – as in the units of an artificial neural network – would suffice for the macro-level structures that generate the ‘surprising’ emergent behaviour exhibited in recurrent ANNs. There would be no ‘downward’ causal influence from the assemblage itself since the micro-arrangement would already have fixed all the macro-level properties.

Otherwise put, if there is a supervenience of emergent macro-properties on micro-properties, then what appears as a top-down influencing of parts by the entity they compose is just an impressive bit of micro-puppetry. It follows that there would be no principled ontological distinction to be made between assemblages and aggregates. There may be marked empirical differences of course, since what Delanda calls assemblages involve nonlinear interactions (whose equations of motion cannot be expressed as a linear sum of independent components) and these often lead to what Bedau calls weak emergence. But weak emergence is not enough to account for the ontological democracy of a flat ontology. It may be possible to buy real ontological emergence, of course, if we are prepared to accept that the parts of assemblages are not parts in the way that parts of aggregates are – that they are not distinct objects with independent properties or powers. But on this picture, assemblages begin to look spookily like good old bad old totalities!

Bibliography

Bryant, Levi (2010), ‘The Ontic Principle: Outline of an Object-Oriented Ontology’, in Levi Bryant, Nick Srnicek and Graham Harman (eds), The Speculative Turn: Continental Materialism and Realism (Melbourne: Re.press).

Bedau, Mark (1997), ‘Weak Emergence’, Philosophical Perspectives, 11, Mind, Causation, and World, pp. 375-399.

Bishop, Robert C. (2008), ‘Downward Causation in Fluid Convection’, Synthese 160: pp. 229-248.

Delanda, M. (2004), Intensive Science and Virtual Philosophy. London: Continuum.

Delanda, M. (2006), A New Philosophy of Society: Assemblage Theory and Social Complexity. London: Continuum.

Delanda, M. (2010), ‘Emergence, Causality and Realism’, in Levi Bryant, Nick Srnicek and Graham Harman (eds), The Speculative Turn: Continental Materialism and Realism (Melbourne: Re.press).

Deleuze, G. (1994), Difference and Repetition, Paul Patton (trans.). London: Athlone Press.

Harman, Graham (2008), ‘Delanda’s Ontology: assemblage and realism’, Continental Philosophy Review 41, 367-383.

Sober, Elliot (1980) ‘Evolution, Population Thinking and Essentialism’, Philosophy of Science 47(3), pp. 350-383.

Thoughts on Flat Ontology

On September 15, 2010, in Uncategorized, by enemyin1

The term ‘flat ontology’ was coined by Manuel DeLanda in his book Intensive Science and Virtual Philosophy. Flat ontologies are opposed there to hierarchical ontologies in which the structure and evolution of reality is explained by transcendent organizing principles such as essences, organizing categories or natural states:

[While] an ontology based on relations between general types and particular instances is hierarchical, each level representing a different ontological category (organism, species, genera), an approach in terms of interacting parts and emergent wholes leads to a flat ontology, one made exclusively of unique, singular individuals, differing in spatio-temporal scale but not in ontological status (DeLanda 2004, p. 58).

In a flat ontology the organization of entities is explained with reference to interactions between particular, historically locatable entities. It is never the result of entities of one ontological kind being related to an utterly different order of being like a God, a transcendental subject, a natural state or its associated species essences (Sober 1980). For flat ontologies, the factors which motivate macro-level change are always emergent from and ‘immanent’ to the systems in which the change occurs.

DeLanda’s characterization of flat ontology comes during a discussion of the ontological status of species, in which he sides with philosophers of biology like David Hull and Elliot Sober who hold that species are differentiated populations that emerge from variations among organisms and the evolutionary feedback processes these drive (DeLanda 2004, 60). For DeLanda, evolutionary feedback instances a universal tendency for identifiable things and their properties to emerge from intensive (or productive) differences such as variations in heritable adaptive differences or chemical concentrations (Ibid., 58-9; 70). Thus the formation of soap bubbles depends on the tendency of component molecules to assume a lower state of free energy, minimizing inter-molecular distances and cancelling the forces exerted on individual molecules by their neighbors (Ibid., 15). The process instantiates an abstract tendency for near-equilibrium systems with free energy to ‘roll down’ to a macrostate attractor. Thus for DeLanda’s ontology (following Deleuze) individuals are not products of the operations of a Kantian/Husserlian transcendental subject but of the cancellation of intensive differences and the generative processes they drive. These processes are governed by mathematical structures – e.g. ‘virtual’ attractors or ‘singularities’ – which are ‘quasi-causal’ influences on their trajectory through a particular state space (Ibid., 14).
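
The ‘rolling down’ image admits a toy numerical rendering (my own sketch, with an arbitrary quadratic function standing in for free energy): wherever the system starts, the intensive difference – here, the gradient – is progressively cancelled and the same macrostate attractor is reached:

```python
# Gradient descent on an energy-like function E(x) = (x - 3)**2: the
# gradient (the 'intensive difference') drives the system until it is
# cancelled at the minimum x = 3, whatever the starting point.
E_grad = lambda x: 2 * (x - 3)

for x0 in (-10.0, 0.0, 25.0):
    x = x0
    for _ in range(100):
        x -= 0.1 * E_grad(x)    # move against the gradient
    print(f"start {x0:+.1f} -> settles at {x:.6f}")   # all near 3.0
```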

How do we reconcile this second ontological claim (which I will refer to as ‘transcendental materialism’, or TM) with an adherence to a flat ontology of individuals? Is ontological flatness merely a regional principle applying to the ‘bits’ of the universe where differentiated particulars have already emerged from intensive processes, rendering their generative mechanisms irrelevant to understanding or categorizing the entities they have become? Moreover, if these processes are explained in terms of the virtual structures they exhibit, such as their singularities, doesn’t TM just reintroduce an ontological hierarchy between particular and universal?*

Graham Harman argues that the quasi-causal role of the abstract or virtual in DeLanda’s thought vitiates its commitment to a flat ontology for which “atoms have no more reality than grain markets or sports franchises” (Harman 2008, 370). Thus while depriving species and kinds of any distinctive organizing role, DeLanda inflates the role of the ‘genus’ in the form of virtual patterns (such as the relationship between the topology of systems and their capacity for autocatalysis explored by Stuart Kauffman and others). Secondly, subordinating individuals to their historical generative processes is seen by Harman as a way of ‘undermining’ the status of the particular or individual, which – against the letter of flat ontology – is somehow less real or effective than the intensive processes that produce it.

I think Harman does contemporary philosophers a favour by anatomizing these tensions within DeLanda’s materialism. However, it is far from clear to me that the regulative ideal of ontological flatness necessitates an ontology in which deep individuals and their (largely non-manifest) capacities play the central organizing role. It may be that the generative histories of particulars are relevant only insofar as they leave “lasting fingerprints” on the particulars they generate, making DeLanda’s proposal that we categorize particulars by way of the generative processes that produce them potentially problematic in some cases (Ibid., 374; DeLanda 2004, 50). However, if DeLanda’s (and Deleuze’s) transcendental materialism is correct, then any entity generated as a result of these processes will always be – as Iain Grant emphasizes – a fragile achievement, fatally involved in the play of further intensities (for example, at certain temperature thresholds the lipid layers dividing biological cells from their watery milieu will simply melt, and their ‘cohesion’ as individuals breaks down). The question of typing by generative process is thus an empirical matter of the causal relevance of such processes to the maintenance of individuals at all scales.

There is no reason why flat ontologies have to be individualist or object-oriented. The concept of the ‘individual’ and the wider category of the ‘particular’ are often conflated. The latter category may contain events, ‘diffusions’ or collectives: each of which may be insufficiently differentiated to qualify for objecthood (Roden 2004, p. 204). The cancellation of intensive quantities can certainly be accommodated within the category of particular events without threatening flatness (whether this is an orthodox Deleuzean solution doesn’t concern me). Secondly, insofar as the virtual laws of form which DeLanda describes reflect the mathematical structure of morphogenetic processes or systems, their ontological autonomy need not violate the autonomy of the particular. Rather, morphogenetic structures reflect substrate-neutral or formal constraints on the behavior of material systems whose effects are entirely produced by those systems. Quasi-causes do not preempt causes proper but reflect structural similarities between systems with otherwise distinct components.

For example, Stuart Kauffman has used computer simulations of so-called ‘NK Boolean networks’ to argue that the capacity of systems of mutually interacting parts to generate stable auto-catalytic cycles is sensitive to the number of inter-connections between those parts. If the number of connections is large (that is, if the number of connections K to a given component approximates the number of components N) the system behaves in a random, disordered way. However, for smaller values of K (e.g. K=2) the system settles down to exploring a relatively small number of ‘attractor’ sequences. Kauffman speculates that this relationship is substrate-neutral – independent of the nature of the system components (they could be nodes in an NK Boolean simulation or chemical substances in a solution).
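
A minimal simulation in the spirit of Kauffman’s networks – my own sketch, with arbitrary parameters rather than his – illustrates the contrast: for K=2 the dynamics typically settle onto short attractor cycles, while for K approaching N the cycles tend to be much longer and more disordered:

```python
import random

def attractor_length(N, K, seed):
    """Random Boolean network: N nodes, each a random Boolean function of
    K other nodes. Iterate until a state repeats; return the cycle length."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(N), K) for _ in range(N)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
    state = tuple(rng.randint(0, 1) for _ in range(N))

    seen = {}
    for t in range(2 ** N + 1):          # a repeat must occur by then
        if state in seen:
            return t - seen[state]       # length of the attractor cycle
        seen[state] = t
        state = tuple(
            tables[i][int("".join(str(state[j]) for j in inputs[i]), 2)]
            for i in range(N)
        )

for K in (2, 8):
    lengths = [attractor_length(N=12, K=K, seed=s) for s in range(10)]
    print(f"K={K}: attractor cycle lengths {lengths}")
```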

So a provisional conclusion, here, is that we can retain the role of structural ‘quasi-causes’ and reject the primacy of individuals without compromising the regulative ideal of ontological flatness.

DeLanda, Manuel. (2004), Intensive Science and Virtual Philosophy. London: Continuum.

___(2006), A New Philosophy of Society. London: Continuum.

Harman, Graham (2008), ‘Delanda’s Ontology: assemblage and realism’, Continental Philosophy Review 41, 367-383.

Roden, David (2004), ‘Radical Quotation and Real Repetition’, Ratio: An international journal of analytic philosophy, XVII/2, pp. 191–206.

Sober, Elliot (1980) ‘Evolution, Population Thinking and Essentialism’, Philosophy of Science 47(3), pp. 350-383.

*We could also ask: is the cancellation of intensive difference merely a regional principle applying to various kinds of thermodynamic systems rather than, say, to more fundamental physical entities or structures?