A highly illuminating discussion of the place of value, meaning and purpose within a naturalistic worldview. H/t synthetic zero.

Braidotti’s Vital Posthumanism

On October 21, 2013, in Uncategorized, by enemyin1

Critical Posthumanists argue that the idea of a universal human nature has lost its capacity to support our moral and epistemological commitments. The sources of this loss of foundational status are multiple, according to writers like Donna Haraway, Katherine Hayles (1999), Neil Badmington (2003), Claire Colebrook and Rosi Braidotti. They include post-Darwinian naturalizations of life and mind that theoretically level differences between living and machinic systems, and the more intimate ways of enmeshing living entities in systems of control and exploitation that flow from the new life and cognitive sciences. Latterly, writers such as Braidotti and Colebrook have argued that a politics oriented purely towards the rights and welfare of humans is incapable of addressing issues such as climate change or ecological depletion in the anthropocene era in which humans “have become a geological force capable of affecting all life on this planet” (Braidotti 2013: 66).

On the surface, this seems like a hyperbolic claim. If current global problems are a consequence of human regulation or mismanagement, then their solution will surely require human political and technological agency and institutions.

But let’s just assume that there is something to the critical posthumanist’s deconstruction of the human subject and that, in consequence, we can no longer assume that the welfare and agency of human subjects should be the exclusive goal of politics. If this is right, then critical posthumanism needs to do more than pick over the vanishing traces of the human in philosophy, literature and art. It requires an ethics that is capable of formulating the options open to some appropriately capacious political constituency in our supposedly post-anthropocentric age.

Braidotti’s recent work The Posthuman is an attempt to formulate such an ethics. Braidotti acknowledges and accepts the levelling of the status of human subjectivity implied by developments in cognitive science and biology and the “analytic posthumanism” that falls out of this new ontological vision. However, she is impatient with what she perceives as a disabling vacillation and neutrality that easily follows from the junking of the human subject as the arbiter of the right and the good. She argues that a posthuman ethics and politics need to retain the idea of political subjectivity – an agency capable of constructing new forms of ethical community and experimenting with new modes of being:

In my view, a focus on subjectivity is necessary because this notion enables us to string together issues that are currently scattered across a number of domains. For instance, issues such as norms and values, forms of community bonding and social belonging as well as questions of political governance both assume and require a notion of the subject.

However, according to Braidotti, this is no longer the classical self-legislating subject of Kantian humanism. It is a vital, polyvalent connection-maker constituted “in and by multiplicity” – by “multiple belongings”:

The relational capacity of the posthuman subject is not confined within our species, but it includes all non-anthropocentric elements. Living matter – including the flesh – is intelligent and self-organizing, but it is so precisely because it is not disconnected from the rest of organic life.

‘Life’, far from being codified as the exclusive property or unalienable right of one species, the human, over all others or of being sacralised as a pre-established given, is posited as process, interactive and open ended. This vitalist approach to living matter displaces the boundary between the portion of life – both organic and discursive – that has traditionally been reserved for anthropos, that is to say bios, and the wider scope of animal and nonhuman life also known as zoe (Braidotti 2013: 60).

Thus posthuman subjectivity, for Braidotti, is not human but a tendency inherent in human and nonhuman living systems alike to affiliate with other living systems to form new functional assemblages. Clearly, not everything has the capacity to perform every function. Nonetheless, living systems can be co-opted by other systems for functions “God” never intended and Mother Nature never designed them for. As Haraway put it:  ‘No objects, spaces, or bodies are sacred in themselves; any component can be interfaced with any other if the proper standard, the proper code, can be constructed for processing signals in a common language’ (Haraway 1989: 187). There are no natural limits or functions for bodies or their parts, merely patterns of connection and operation that do not fall apart all at once.

Zoe . . . is the transversal force that cuts across and reconnects previously segregated species, categories and domains. Zoe-centered egalitarianism is, for me, the core of the post-anthropocentric turn: it is a materialist, secular, grounded and unsentimental response to the opportunistic trans-species commodification of Life that is the logic of advanced capitalism.

Of course, if anything can be co-opted for any function that its powers can sustain, one might ask how zoe can support a critique of advanced capitalism which, as Braidotti concedes, produces a form of the “posthuman” by radically disrupting the boundaries between humans, animals, species and technique. What could be a greater expression of zoe’s transversal potential than, say, Monsanto’s transgenic cotton Bollgard II? Bollgard II contains genes from the soil bacterium Bacillus thuringiensis that produce a toxin deadly to pests such as bollworm. Unless we believe that there is some telos inherent to thuringiensis or to cotton that makes such transversal crossings aberrant – which Braidotti clearly does not – there appears to be no zoe-eyed perspective that could warrant her objection. Monsanto’s genetic engineers are just sensibly utilizing possibilities for connection that are already afforded by living systems but which cannot be realized without technological mediation (here via gene transfer technology). If the genes responsible for producing the Bt toxin did not work in cotton and increase yields, Bollgard II would presumably not be the type used by the majority of cotton farmers today (Ronald 2013).

Cognitive and biological capitalists like Google and Monsanto seem to incarnate the tendencies of zoe – conceived as a generalized possibility of connection – as much as the “not-for-profit” cyborg experimenters like Kevin Warwick or the publicly funded creators of HTML, Dolly the Sheep and Golden Rice. Doesn’t Google show us what a search engine can do?

We could object to Monsanto’s activities on the grounds that they have invidious social consequences or on the grounds that all technologies should be socially rather than corporately controlled. Neither of these arguments is obviously grounded in posthumanism or “zoe-centrism” – Marxist humanists would presumably agree with the latter claim, for example.

However, we can find the traces of a zoe-centered argument in the Deleuzean ethics Braidotti explores in her essay “The Ethics of Becoming Imperceptible” (Braidotti 2006). This argues for an ethics oriented towards enabling entities to actualize their powers to their fullest “sustainable” extent. A becoming or actualization of power is sustainable if the assemblage or agency exercising it can do so without “destroying” the systems that make its exercise possible. Thus an affirmative posthuman ethics follows Nietzsche in making it possible for subjects to exercise their powers to the edge but not beyond it – the point where that exercise falters or where the system exercising it falls apart.

To live intensely and be alive to the nth degree pushes us to the extreme edge of mortality. This has implications for the question of the limits, which are in-built in the very embodied and embedded structure of the subject. The limits are those of one’s endurance – in the double sense of lasting in time and bearing the pain of confronting ‘Life’ as zoe. The ethical subject is one that can bear this confrontation, cracking up a bit but without having its physical or affective intensity destroyed by it. Ethics consists in re-working the pain into threshold of sustainability, when and if possible: cracking, but holding it, still.

So capitalism can be criticized from the zoe-centric position if it constrains powers that could be more fully realized in a different system of social organization. For Braidotti, the capitalist posthuman is constrained by the demands of possessive individualism and accumulation.

The perversity of advanced capitalism, and its undeniable success, consists in reattaching the potential for experimentation with new subject formations back to an overinflated notion of possessive individualism . . ., tied  to the profit principle. This is precisely the opposite direction from the non-profit experimentations with intensity, which I defend in my theory of posthuman subjectivity. The opportunistic political economy of bio-genetic capitalism turns Life/zoe – that is to say human and non-human intelligent matter – into a commodity for trade and profit (Braidotti 2013: 60-61).

Thus she supports “non-profit” experiments with contemporary subjectivity that show what “contemporary, biotechnologically mediated bodies are capable of doing” while resisting the neo-liberal appropriation of living entities as tradable commodities.

Whether the constraint claim is true depends on whether an independent non-capitalist posthuman (in Braidotti’s sense of the term) is possible or whether significant posthuman experimentation – particularly experimentation involving sophisticated technologies like AI or Brain-Computer Interfaces – will depend on the continued existence of a global capitalist technical system to support it. I admit to being agnostic about this. While modern technologies such as gene transfer do not seem essentially capitalist, there is little evidence to date that a noncapitalist system could develop them, or their concomitant forms of hybridized “posthuman”, more prolifically.

Nonetheless, there seems to be a significant ethical claim at issue here that can be used independently of its applicability to the critique of contemporary capitalism.

For example, I have recently argued for an overlap or convergence between critical posthumanism and Speculative Posthumanism (SP): the claim that descendants of current humans could cease to be human by virtue of a history of technical augmentation. Braidotti’s ethics of sustainability is pertinent here because SP in its strong form is also post-anthropocentric – it denies that posthuman possibility is structured a priori by human modes of thought or discourse – and because it defines the posthuman in terms of its power to escape from a socio-technical system organized around human-dependent ends (Roden 2012). The technological offspring described by SP will need to be functionally autonomous insofar as they will have to develop their own ends or modes of existence outside or beyond the human space of ends. Reaching “posthuman escape velocity” will require the cultivation and expression of powers in ways that are sustainable for such entities. This presupposes, of course, that we can have a conception of a subject or agent that is grounded in its embodied capacities or powers rather than in general principles applicable to human agency. Understanding its ethical valence thus requires an affirmative conception of these powers that does not depend on residual anthropocentric ideas such as moral autonomy. Braidotti’s ethics of sustainability thus suggests some potentially viable terms of reference for formulating an ethics of becoming posthuman in the speculative sense.

References

Badmington, N. (2003) ‘Theorizing Posthumanism’, Cultural Critique 53 (Winter): 10-27.

Braidotti, R. (2006), ‘The Ethics of Becoming Imperceptible’, in Constantin Boundas (ed.), Deleuze and Philosophy, Edinburgh: Edinburgh University Press, pp. 133-159.

Braidotti, R. (2013), The Posthuman, Cambridge: Polity Press.

Colebrook, C. (2012a), ‘A Globe of One’s Own: In Praise of the Flat Earth’, Substance: A Review of Theory & Literary Criticism 41(1): 30-39.

Colebrook, C. (2012b), ‘Not Symbiosis, Not Now: Why Anthropogenic Change Is Not Really Human’, Oxford Literary Review 34(2): 185-209.

Haraway, Donna (1989), ‘A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s’, in Elizabeth Weed (ed.), Coming to Terms, London: Routledge, pp. 173-204.

Hayles, N. K. (1999), How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, Chicago: University of Chicago Press.

Roden, D. (2010). ‘Deconstruction and excision in philosophical posthumanism’. The Journal of Evolution & Technology, 21(1), 27-36.

Roden, D. (2012). ‘The Disconnection Thesis’. In Singularity Hypotheses (pp. 281-298). Springer Berlin Heidelberg.

Roden, D. (2013). ‘Nature’s Dark domain: an argument for a naturalized phenomenology’. Royal Institute of Philosophy Supplement, 72, 169-188.

Roden, D. (2014), Posthuman Life: Philosophy at the Edge of the Human, Acumen Publishing.

 

The Condition of the Digital Image

On April 12, 2013, in Uncategorized, by enemyin1

There’s a very interesting and instructive conversation between Daniel Rourke and new media artist Hito Steyerl at Rhizome. Reading Steyerl’s remarks on Resnais’ and Marker’s migration from celluloid to the Web, I imagined them evoking perplexity and amusement in cold degenerate matter storage long after the death of our sun.

Epic Object-Oriented Flame War!

On March 15, 2013, in Uncategorized, by enemyin1

 


There’s an epic flame war over at Three Pound Brain in response to Scott Bakker’s discussion of Levi Bryant’s Object Oriented Ontology. I’m sitting this one out like my hero Custard the Cat – in part because I’m just too busy, and in part ’cos I don’t want to distract Scott from the trudge to Golgotterath and the moral necessity of euthanizing our immortal souls.

Autonomy and Modularity

On February 14, 2013, in Uncategorized, by enemyin1


 

Autonomous systems of the kind that we can conceive as emerging from our technology are liable to be modular assemblages of elements that can couple opportunistically with other entities or systems, creating new assemblages whose powers and dispositions are transformed and dynamically put into play by such couplings.

The best way of representing modularity is in terms of networks consisting of nodes and their interconnections. A network is modular if it contains “highly interconnected clusters of nodes that are sparsely connected to nodes in other clusters” (Clune, Mouret and Lipson 2012, 1). In autonomous assemblages modules support functional processes that make a distinct and specialized contribution to maintaining the conditions necessary for other interdependent processes within the assemblage.
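A minimal sketch of this network sense of modularity, in Python: the community-detection routines are real networkx functions, but the toy graph is my own illustration rather than anything from Clune, Mouret and Lipson. Two densely interlinked clusters joined by a single sparse link are recovered as separate modules.

```python
# Two densely interconnected clusters of nodes, sparsely connected to each other.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.Graph()
# Cluster A: every node linked to every other
G.add_edges_from([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
# Cluster B: every node linked to every other
G.add_edges_from([(4, 5), (4, 6), (4, 7), (5, 6), (5, 7), (6, 7)])
# A single sparse bridge between the clusters
G.add_edge(3, 4)

communities = list(greedy_modularity_communities(G))
print("Detected modules:", [sorted(c) for c in communities])
print("Modularity score Q:", round(modularity(G, communities), 3))
```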

Modules may or may not be spatially localized entities. They may be relatively fragmented while exhibiting dynamical cohesion. An instance of a software object class such as an “array” (an indexed list of objects of a single type) need not be instantiated in contiguous regions of a computer’s physical memory. It does not matter where the data representing the array’s contents is physically located so long as the more complex program of which it is a part can locate that data when it needs it. Thus while it is possible that all assemblages must have some spatially bounded parts – organelles in eukaryotic cells and distributors in internal combustion engines come in spatially bounded packages, for example – not all functionally discrete parts of assemblages need be spatially discrete in the way that organelles are. Cultural entities such as technologies or symbols may consist of repeatable or iterable patterns rather than things, and may be conceived as repeatable particular events rather than objects (Roden 2004). Yet in systems – such as socio-technical networks – whose components are cued to recognize and respond to patterns, such entities can exert real causal influence by being repeated into varying contexts.
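As a toy illustration of the point about spatial non-locality (my own example, not one from the text): a Python list behaves as a single functional unit even though the objects it indexes need not sit in one contiguous block of memory.

```python
# A list functions as one indexed "module" while its elements live at
# scattered memory addresses; functional cohesion without spatial contiguity.
xs = ["axe", [1, 2, 3], {"kind": "symbol"}, 3.14]
for i, item in enumerate(xs):
    print(f"index {i}: value={item!r} at address {id(item):#x}")

# The program can still "locate the data when it needs it":
print(xs[2]["kind"])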

Importantly for our purposes, dynamical cohesion should not be conflated with functional stability. An entity can retain its dynamical integrity and intrinsic powers while subtending distinct wide functional roles in the systems to which it belongs. To use Don Ihde’s term, such entities are functionally “multistable”. An Acheulian hand axe – a technology used by humans for over a million years – might have been used as a scraper, a chopper or a projectile weapon.[1] Modern technologies such as mobile phones and computers are, of course, designed to be multistable; though their uses can exceed the specifications of their designers, as when a phone is used as a bomb detonator (Ihde 2012). It seems as if the decomposability of cognitive systems also confers multistability upon their parts, thus contributing to the functional autonomy of the system as a whole.

In cognitive science, the classical modularity thesis held that human and animal minds contain encapsulated, fast and dirty, automatic (mandatory) domain-specific cognitive systems dedicated to specialized tasks such as kinship-evaluation, sentence-parsing or classifying life forms. However, it is an empirical question whether the mind is wholly or partly composed of domain-specific cognitive agents and, as Keith Frankish notes, a further empirical question whether neural modularity also holds: that is, whether domain-specific cognitive functions map onto anatomically discrete regions of the human brain such as Broca’s area (traditionally associated with language processing) or the so-called “Fusiform Face Area” (Frankish 2012, 280). Neither the classical theory of mental modules nor the neural modularity thesis follows from the fact that human brains are decomposable in the network sense presupposed by assemblage theory.

We should nonetheless expect autonomous entities such as present-day organisms or hypothetical posthumans to be network-decomposable assemblages rather than systems in which every part is equally coupled with every other part, because modularity confers flexibility on known kinds of adaptive system.[2] For example, in biological populations modularity is recognized as one of the necessary conditions of evolvability – “an organism’s capacity to generate heritable phenotypic variation” (Kirschner and Gerhart 1998, 8420). Some biologists argue that the transition from prokaryotic cells (whose DNA is not contained in a nucleus) to more complex eukaryotic cells (which have nucleated DNA as well as more specialized subsystems such as organelles) was accompanied by a decoupling of the processes of RNA transcription and subsequent translation into proteins. This may have allowed noncoding (intronic) RNA to assume regulatory roles necessary for producing more complex organisms, because the separation of sites allows the intronic RNA to be spliced out of the messenger RNA where it might otherwise disrupt the production of proteins. If, as seems to be the case, regulatory portions of intronic DNA and RNA are necessary for the production of higher organisms, then this articulation in DNA expression may have allowed the ancestor populations of complex multi-cellular organisms to explore gene-regulation possibilities without disabling protein expression (Ruiz-Mirazo and Moreno 2012, 39; Mattick 2004).
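A toy sketch of the splicing idea in Python (the sequence, the intron coordinates and the splice function are all invented for illustration, not real genomic data or a bioinformatics API): regulatory intronic stretches can sit inside the transcript and be removed before translation, leaving the protein-coding message intact.

```python
# Remove intronic spans from a pre-mRNA string, keeping only the exons.
def splice(pre_mrna, introns):
    """Return mature mRNA with the given (start, end) intron spans removed."""
    exons, prev = [], 0
    for start, end in sorted(introns):
        exons.append(pre_mrna[prev:start])  # keep the exonic stretch
        prev = end                          # skip over the intron
    exons.append(pre_mrna[prev:])
    return "".join(exons)

# A made-up transcript: exons in upper case, introns (with room for
# regulatory sequence) in lower case.
pre_mrna = "AUGGCU" + "guaagu...regulatory...ag" + "UUUGGC" + "gu...ag" + "UAA"
introns = [(6, 30), (36, 43)]   # hypothetical coordinates for this toy string
print(splice(pre_mrna, introns))   # -> "AUGGCUUUUGGCUAA"
```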

The benefits of articulation apply at higher levels of organization in living beings for reasons that may hold for autonomous “proto-ex-artefacts” poised for disconnection. Nervous systems need to be “dynamically decoupled” from the environment that they map and represent because perception, learning and memory rely on establishing specialized information channels and long-term synaptic connections in the face of changing environmental stimulation. This entails a capacity “for cells to step back from the manifold of ambient stimulus and to be prepared to pick and choose which stimulus to make salient and thus in so doing a capacity to enjoy an unprecedented level of internal autonomy” (Moss 2006, 932–934; Ruiz-Mirazo and Moreno 2012, 44).[3]

Network decomposition of internal components also seems to carry advantages within control systems, including those that might actuate posthumans one day.  Research into locomotion in insects and arthropods shows that far from using a central control system to co-ordinate all the legs in a body, each leg tends to have its own pattern generator.


A coherent motion capable of supporting the body emerges from the excitatory and inhibitory actions of the distributed system rather than through co-ordination by a central controller. The evolutionary rationale for distributed control of locomotion can be painted in similar terms to that of the articulation of DNA transcription and expression considered above – a distributed system being far less fragile in the face of evolutionary tinkering than a central control architecture in which the function of each part is heavily dependent on those of other parts.
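A deliberately minimal caricature of distributed pattern generation, in Python (my own construction, not a model from the locomotion literature; all parameter values are invented): two leg “pattern generators” modelled as phase oscillators with mutually inhibitory coupling settle into the alternating, anti-phase rhythm of a gait without any central controller coordinating them.

```python
# Two leg oscillators; each is only nudged by the other, with no central clock.
import numpy as np

omega = 2.0 * np.pi * 1.0          # each leg's intrinsic stepping rate (1 Hz)
k = 2.0                            # strength of the mutual inhibitory coupling
theta = np.array([0.3, 0.5])       # start nearly in phase
dt, steps = 0.001, 5000

for _ in range(steps):
    # Each oscillator is pushed toward being half a cycle away from the other.
    coupling = k * np.sin(theta[::-1] - theta - np.pi)
    theta = theta + dt * (omega + coupling)

phase_gap = (theta[1] - theta[0]) % (2 * np.pi)
print(f"phase difference after settling: {phase_gap:.2f} rad (close to pi means the legs alternate)")
```

The alternating rhythm here is an emergent property of the coupled pair; removing or perturbing one oscillator degrades the gait gracefully rather than disabling a central controller.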

This rationale plausibly applies to human beings as well as to our immediate primate ancestors, especially in the case of sophisticated cognitive feats that require the organism to learn specific cultural patterns – such as languages – which would not have been stable or invariant enough to have selected for the component abilities that they require over evolutionary time (Deacon 1997, 322-334 – the Visual Word Form Area is a particularly spectacular example of such “cultural recycling” – see below). While this is compatible with network decomposition it may not tally with the classical modularity thesis, since it suggests an evolutionary rationale for the promiscuous re-use of functionally multistable components.

Evidence from functional imaging suggests that anatomically discrete regions like Broca’s area or the Fusiform Face Area are co-opted by evolutionary and cultural processes in support of functionally disparate cognitive tasks. For example, relatively ancient areas in the human brain known to be involved in motor control are also involved in language understanding. This suggests that circuits associated with grasping the affordances and potentialities of objects were recruited over evolutionary time to meet the emerging cultural demands of symbolic communication (Anderson 2007, 14). In a recent target article on neural reuse in Behavioral and Brain Sciences, Michael Anderson cites research suggesting that older brain areas tend to be less domain-specific and more multistable – that is, they tend to get re-deployed in a wider variety of cognitive domains (Anderson 2010, 247). Peter Carruthers and Keith Frankish likewise argue that circuits in the visual and motor areas which were initially involved in controlling and anticipating actions have become co-opted in the production and monitoring of propositional thinking (beliefs, desires, intentions, etc.) through the production of inner speech. An explicit belief, for example, can be implemented as a globally available action-representation – an offline “rehearsal” of a verbal utterance – to which distinctive commitments to further action or inference can be undertaken (Carruthers 2008). Andy Clark cites experimental work on Pan troglodytes chimpanzees which comports with Carruthers and Frankish’s assumption that cognitive systems adapted for pattern recognition and motor control can be opportunistically reused to bootstrap an organism’s cognitive abilities. Here, an experimental group of chimps was trained to associate two different plastic tokens with pairs of identical and pairs of different objects respectively. The experimental group was later able to solve a difficult second-order difference categorization task that defeated the control group of chimps who had not been trained to use the tokens:

The more abstract problem (which even we sometimes find initially difficult!) is to categorize pairs-of-pairs of objects in terms of higher order sameness or difference. Thus the appropriate judgement for pair-of-pairs “shoe/shoe and banana/shoe” is “different” because the relations exhibited within each pair are different. In shoe/shoe the (lower order) relation is “sameness”; in banana/shoe it is difference. Hence the higher-order relation – the relation between the relations – is difference (Clark 2003, 70).

Interestingly, Clark notes that the chimps in the experimental group were able to solve the problem without repeatedly using the physical tokens, suggesting that they were able to associate “difference” and “sameness” with inner surrogates similar to the offline speech events posited by Carruthers and Frankish (71; see also Wheeler 2004).
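A schematic sketch of the bootstrapping move (mine, following the shape of Clark’s discussion; the helper functions are hypothetical, not anyone’s published model): once a pair can be re-described by a surrogate token, the hard second-order question “do these two pairs exhibit the same relation?” collapses into an ordinary first-order comparison of tokens.

```python
# Re-describing pairs with surrogate tokens turns a second-order relational
# judgement into a first-order comparison.
def token(pair):
    a, b = pair
    return "S" if a == b else "D"          # first-order judgement: same or different?

def pairs_of_pairs_same(pair1, pair2):
    return token(pair1) == token(pair2)    # second-order judgement, via the tokens

print(pairs_of_pairs_same(("shoe", "shoe"), ("banana", "shoe")))  # False: "different"
print(pairs_of_pairs_same(("shoe", "shoe"), ("cup", "cup")))      # True: "same"
```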

This account of the emergence of specialized symbolic and linguistic thinking via the reuse of neural circuits evolved for pattern recognition and motor control illustrates a more general ontological schema. Assemblages – whether human, inhuman, animate or inanimate – inherit the capacity to couple with larger assemblages from their structure and components, and are similarly constrained by those powers. Carbon atoms have the power to assemble complex molecular chains because their four valence electrons permit the formation of multiple chemical bonds. Simpler prokaryotic cells may lack the capacity to evolve the regulatory networks required to form multicellular affiliations because their encoding process is insufficiently differentiated. Likewise, although specific neural circuits may be inherently multistable it does not follow that each can do anything. Each may have specific “biases” or computational powers that reflect its evolutionary origins (Anderson 2010, 247). For example, Stanislas Dehaene and Laurent Cohen review some remarkable results suggesting the existence of a Visual Word Form Area (VWFA), a culturally universal cortical map situated in the fusiform gyrus of the temporal lobe, which is involved in the recognition of discrete and complex written characters independently of writing system.

As Dehaene and Cohen observe, it is not plausible to suppose that the VWFA evolved specifically to meet the demands of literate cultures, since writing was invented only 5,400 years ago and only a fraction of humans have been able to read for most of this period (Dehaene and Cohen 2007, 384). Thus it appears that the cortical maps in the VWFA have structural properties which make them ideal for reuse in script recognition despite not having evolved for the representation of written characters (among the factors suggested is that the VWFA is located in a part of the fusiform area receptive to high-acuity visual input from the fovea – 389).

Coupling an assemblage with another system – e.g. a transcultural code such as a writing or number system – may, of course, increase the functional autonomy of that assemblage by allowing it to respond fluidly and adaptively to the demands of its environment – enlisting new affiliations and resources which then come to be functional for it. Literacy and numeracy have become functionally necessary for economic activity in advanced industrial societies – clearly this was not always so! However, this is only possible because both the assemblage and its parts are open to functional shifts that, in effect, allow the creation of new social “megamachines” which extend beyond the coupled individuals. Thus while complex assemblages articulated into many functionally open subsystems may be more functionally autonomous than less articulated ones – more capable of accruing new functions – they are also more apt to be “deterritorialized” by happening on new modes of existence and new ways of being affected (DeLanda 2006, 50-51).

References:

Anderson, Michael (2007). “Massive redeployment, exaptation, and the functional integration of cognitive operations”. Synthese, 159(3): 329-345.

Anderson, M. L. (2010). “Neural reuse: A fundamental organizational principle of the brain.” Behavioral and Brain Sciences, 33(4), 245.

Carruthers, Peter (2008). “An architecture for dual reasoning”. In J. Evans & K. Frankish (eds.), In Two Minds: Dual Processes and Beyond. Oxford University Press.

Clark, Andy (2003). Natural Born Cyborgs. Oxford: Oxford University Press.

Clune, J., Mouret, J. B., & Lipson, H. (2012). “The evolutionary origins of modularity”. arXiv preprint arXiv:1207.2743.

Deacon, Terrence (1997). The Symbolic Species: The Co-evolution of Language and the Human Brain. London: Penguin.

Dehaene, S., & Cohen, L. (2007). Cultural recycling of cortical maps. Neuron,56(2), 384-398.

DeLanda, M. (2006), A New Philosophy of Society: Assemblage Theory and Social Complexity, London: Continuum.

Frankish, Keith (2012). “Cognitive Capacities, Mental Modules, and Neural Regions”. Philosophy, Psychiatry, and Psychology 18 (4).

Ihde, D. (2012). “Can Continental Philosophy Deal with the New Technologies?” Journal Of Speculative Philosophy, 26(2), 321-332.

Kirschner, Marc and Gerhart, John (1998). “Evolvability”. Proceedings of the National Academy of Sciences USA, 95: 8420-8427.

Moss, L. (2006). “Redundancy, plasticity, and detachment: The implications of comparative genomics for evolutionary thinking”. Philosophy of Science, 73, 930–946.

Roden, David (2004). ‘Radical Quotation and Real Repetition’, Ratio (new series) XVII 2 June 2004, 191-206.

Ruiz-Mirazo, Kepa & Moreno, Alvaro (2012). “Autonomy in evolution: from minimal to complex life”. Synthese 185 (1):21-52.

Wheeler, M. (2004). “Is language the ultimate artefact?.” Language Sciences, 26(6), 693-715.

 


[1] See Don Ihde, “Embodiment and Multistability”, http://vimeo.com/49101825, accessed 14/02/2013.

[2] One of the benefits of so-called “object-oriented” (OO) programming languages like Java over “procedural” programming languages such as COBOL is that OO programs organize software objects into encapsulated modules. When a client object in the program has to access an object (e.g. a data structure such as a list) it sends a message to the object that activates one of the object’s “public” methods (e.g. the client might “tell” the object to return an element stored in it, add a new element or carry out an operation on existing elements). However, the client’s message does not specify how the operation is to be performed. This is specified in the code for the object. From the perspective of the client, the object is a black box that can be activated by public messages yielding a consumable output. This means that changes in how the proprietary methods of the object are implemented do not force developers to change the code in other parts of the program, since these details do not “matter” to the other objects. Maintenance and development of software systems becomes simpler.
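A minimal sketch of this encapsulation point, written in Python rather than Java for brevity (the class and method names are purely illustrative, not any real library’s API): clients see only the public methods, so the object’s internal storage can change without client code changing.

```python
# Clients use only the public interface; how storage is implemented is hidden.
class SortedStore:
    def __init__(self):
        self._items = []                 # "private" implementation detail

    def add(self, x):                    # public method: the client "tells" it to add
        self._items.append(x)
        self._items.sort()

    def get(self, i):                    # public method: return the i-th smallest element
        return self._items[i]

# Client code depends only on the messages it can send ...
store = SortedStore()
store.add(3); store.add(1); store.add(2)
print(store.get(0))   # -> 1

# ... so the maintainer could later replace the sorted list with, say, a tree
# inside SortedStore without any client code having to change.
```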

[3] The cochlear cells in our inner ear are connected to hair-like cells which are receptive to sound vibrations. This specialized arrangement allows the cochlea to conduct a fast spectrum analysis on incoming vibrations, assaying the relative amplitudes of components in complex sounds.
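As a loose numerical analogy only (my own, and not a model of cochlear mechanics; the tone is synthetic and the numpy FFT routines are standard): a Fourier transform recovers the relative amplitudes of the components of a complex tone, which is the kind of spectrum analysis the footnote attributes to the cochlea.

```python
# Decompose a synthetic complex tone into the relative amplitudes of its components.
import numpy as np

fs = 8000                                   # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
# A complex tone: a strong 440 Hz component plus a weaker 880 Hz component.
signal = 1.0 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)

spectrum = np.abs(np.fft.rfft(signal)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)

for f in (440, 880):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f} Hz component amplitude ~ {spectrum[idx]:.2f}")
```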

An Allegory of Phenomenology

On February 10, 2013, in Uncategorized, by enemyin1

A cinematic response to Scott Bakker’s excellent discussion of Dennett and informational neglect here