Evan Thompson on Dark Phenomena

On July 15, 2014, in Uncategorized, by enemyin1


In a Notre Dame review of Phenomenology and Naturalism: Examining the Relationship between Human Experience and Nature, edited by Havi Carel and Darian Meacham, Evan Thompson criticizes my claim that the existence of dark phenomenology implies that phenomenology must be a naturalistic discipline without transcendental warrant. He is correct about my aims and provides a neat summary of my account of dark phenomenology:

David Roden argues that phenomenology should be retained only as a descriptive, empirical method for providing data about experience. This method must be recognized as limited, because it cannot penetrate “dark phenomena” that are not available to introspection or reflective intuition, such as very fine-grained perceptual discriminations of shades of color that cannot be held in memory, or the deep structure of temporal experience. Roden’s discussion of these dark phenomena is illuminating, but his conclusion about the status of phenomenology does not follow. Although he is right that phenomenology cannot be a completely autonomous investigation, but rather must be informed by experimental investigations, it hardly follows that all that phenomenology can do is provide data about what is available to introspection. On the contrary, as the articles by Zahavi, Ratcliffe, Wheeler, and Morris demonstrate, phenomenology can provide new concepts and models for enriching our understanding of nature.

However, I don’t think Thompson’s objection will do as it stands. The position developed in “Nature’s Dark Domain” is consistent with phenomenology being conceptually productive and revealing about nature. If phenomenology is not completely “dark”, it could not be otherwise. I only argue that phenomenological reflection cannot provide future-proof (a priori) grounds for claims about invariants of experience or being because – alone and unaided – it cannot tell us what our phenomenology is.

For this reason, my position differs from Mike Wheeler’s “Science Friction: Phenomenology, Naturalism and Cognitive Science” from the same volume. There Wheeler argues that transcendental phenomenology can unpack the “constitutive” conditions of cognition and agency – which tell us what it is, in general, to be an agent or a cognizer – while cognitive science reveals the causal “enabling” conditions for cognition and agency. For example, he claims that Heidegger’s phenomenology of coping is illuminated by experiments in situated robotics using action-oriented representations – which represent an agent’s world in terms of the way it interacts with its body.

So the transcendental/constitutive conditions for agency may require that contextual relevance and an understanding of affordances are necessary for agency, while action-oriented representations reveal one way in which contextual relevance is enabled in representational mechanisms (Wheeler 2013: 143, 152; 2005: 197).

According to Wheeler, this model furnishes a minimal naturalism which “domesticates” the transcendental: constitutive conditions are subject to empirically-motivated revision.

However, the kind of revision that Wheeler envisages in his essay seems modest. For example, Heidegger’s account of temporality as thrownness implies that the human agent always encounters the world “embedded within a pre-structured field of intelligibility into which she has been enculturated.” (Wheeler 2013: 158) Wheeler allows that both the mechanisms and the cultural forms of this field can be revealed scientifically (e.g. via cognitive science or ethology):

A consequence of this temporality-driven cultural conditioning of the transcendental is that although there will be specific factors that are transcendentally presupposed by any particular act of sense-making there is no expectation that those factors will be permanently fixed for all human psychological phenomena across space and time (160)

Earlier in his essay, Wheeler provides a succinct account of the epistemological commitments of naturalism: namely, that for the naturalist, science and philosophy are continuous. If so, there is no point in this continuum that can be immune from revision in principle – even transcendental claims about the structure of temporality in human agents. It follows that all constitutive claims are empirically defeasible. There is no interesting epistemological boundary to be drawn between the transcendentally constitutive structure and the various “fillers” for that structure revealed by science. Now, this is just what we would expect if – as I argue – the deep structures posited by phenomenology give only limited insight to bare reflection or phenomenological interpretation.

Thus if the deep structure of lived time is not given to us, we have a limited first-person grasp of its nature and scope. A deconstructive reading of Heideggerian temporality, for example, implies that the differential or “ecstatic” model of temporality generalizes well beyond transcendental subjects to structures of “generalized writing” found at all levels of biological and technological existence (Stiegler 1998; Hägglund 2008, 2011). The point is not that deconstruction provides a wider-ranging transcendental warrant but that it reveals an indeterminacy in the more narrowly phenomenological ones. If we do not know what temporality is or what must “have it”, we cannot claim to know that all serious agents must have a culturally pre-structured field, for we have produced only a loose, holistic model of a process whose underlying nature is not reflectively available to us, and which may not even be holistic in the phenomenological sense. If the depth-structure of temporality is dark, the constitutive features of all the phenomena in which it is supposedly involved are also occluded. Thus claims about constitutive conditions of cognition and agency are fodder for empirical defeat even where they yield passing insight into nature.

References

Hägglund, M. 2008. Radical Atheism: Derrida and the Time of Life. Stanford, CA: Stanford University Press.

____2011. “The Trace of Time and the Death of Life: Bergson, Heidegger, Derrida”, https://www.youtube.com/watch?v=9qqaHGUiew4 (accessed November 2011).

Roden, D. 2013. Nature’s Dark Domain: An Argument for a Naturalised Phenomenology. Royal Institute of Philosophy Supplement, 72, 169-188.

Stiegler, B. 1998. Technics and Time, 1: The Fault of Epimetheus, Vol. 1. Stanford, CA: Stanford University Press.

Wheeler, M. 2005. Reconstructing the Cognitive World: The Next Step. Cambridge, MA: MIT Press.

____ 2013. Science Friction: Phenomenology, Naturalism and Cognitive Science. Royal Institute of Philosophy Supplement, 72, 135-167.


Epistemic indeterminacy concerns our representations of things rather than the things themselves. Thus the location of a mobile phone with a Nokia ringtone may be represented as indeterminate between your pocket and your neighbor’s handbag. This epistemic indeterminacy is resolvable through the acquisition of new information: here, by examining the two containers. By contrast, metaphysical indeterminacy – if such there be – is brute. It cannot be cleared up by further investigations.

We can thus distinguish between being indeterminately represented and being indeterminately ?  in situations where it is possible to progressively reduce and eliminate the former indeterminacy (Roden 2010: 153).

Facts are metaphysically indeterminate if they involve indeterminate natures. The nature of a thing is indeterminate if it is impossible to determine it via some truth-generating procedure that will eliminate competing descriptions of it. Clearly, some will cavil with my use of “fact” and “nature” either because they see “facts” as ineluctably propositional or because they have nominalist quibbles about attributing any kind of nature or facticity to the non-conceptual sphere. However, like Marcus Arvan, I don’t see any conceptual affiliation as ineluctable. If the world is structured in ways that cannot be captured without remainder in propositions, it is not inappropriate to use the term “fact” to describe these structures – or so I will proceed to do here.

My favorite case of putative metaphysical indeterminacy involves the two versions of the Located Events Theory of sound. LET1 (Bullot et al. 2004; Casati and Dokic 2005) states that sounds are resonance events in objects; LET2 says that sounds are disturbances in a medium caused by vibrating objects (O’Callaghan 2009). According to LET1 there are sounds in vacuums so long as there are objects located in them. According to LET2 there are not. So the theories have different implications. There is also nothing that obviously favours the one over the other in the light of ordinary observations and inferences regarding sound.

As I put it in “Sonic Events”, most people would probably judge that there is no sound produced when a tuning fork resonates in an evacuated jar – “Yet were the air in a jar containing a vibrating tuning fork to be regularly evacuated and replenished we might perceive this as an alteration in the conditions of audition of a continuous sound, rather than the alternating presence and absence of successive sounds” (Roden 2010: 156). You pays yer money, but it’s hard to believe that the world cares how we describe this state of affairs, or that persuasive grounds will settle the matter one bright day.

Anti-realists might say that this indeterminacy is practical rather than factive. It reflects discrepant uses of the same lexical item (“sound”) only. So (as in the case of metaphysical indeterminacy) there is no information gathering procedure that would settle the issue. But that is not because the nature of sound is indeterminate in this respect. Rather, there is no deeper (determinate or indeterminate) fact here at all.

However, this ignores the fact that LET1 and LET2 are responsive to an auditory reality that they both describe, albeit in incompatible ways. Sounds existed before there were ontologies of sound and thus have an independent reality to which LET1 and LET2 attest. If so there must be a deeper fact which accounts for the indeterminacy.

Now, either this fact is indeterminate or it is not.

If it is not, then there is some uniquely ideal account of sound: ITS. The ideal theory cannot be improved via the acquisition of further information because it already contains all the relevant information there is to be had and has no empirically equivalent competitors (there is no ITS2, etc.). ITS might or might not be an event theory – e.g. it could be a “medial theory” which represents sounds as the transmission of acoustic energy (Bullot et al. 2004). So ITS ought to replace both LET1 and LET2. We may not be aware of it, but we know that it exists somewhere in Philosophers’ Heaven (or the Space of Reasons).

If the fact in question is indeterminate, there is no ideal account which captures the nature of sound. Or rather, the best way to capture it is in the alternation between different accounts.

Given indeterminacy, then, there is an auditory reality which permits of description, but which cannot be completely described.

There is an interesting comparison to be made here between the indeterminacy of auditory metaphysics and the claims regarding the indeterminacy of semantic interpretation described by Davidson and others. Again, one can take indeterminacy in a deflationary anti-realist spirit – there are no semantic facts, just competing interpretations and explications recursively subject to competing interpretations ad infinitum (one popular way of glossing Derridean différance!).

Or there are semantic facts. In which case, these may be determinate or indeterminate. If there are determinate semantic facts, then the indeterminacy of radical interpretation is an artefact of our ignorance regarding semantic facts. If semantic facts are indeterminate, however, there is – again – a reality that is partially captured in competing interpretations that is never fully mirrored or reflected in them.

At this point it is interesting to consider why we might opt for factive or metaphysical indeterminacy rather than anti-realist indeterminacy. If we have reasons for believing in indeterminate facts – the ones for which there are irreducibly discrepant descriptions – this is presumably because we think there is some mind-independent reality outside our descriptions whose nature is indeterminate in some respects. If this thought is justified, it is presumably not justified by any single description of the relevant domain. Nor is it justified by the underdetermination of descriptions (since this is equally consistent with anti-realism). So if we are justified in believing that there are indeterminate metaphysical facts, we must be justified by sources of non-propositional knowledge. For example, perhaps our perceptual experience of sound supports the claim that sounds occur in ways that can be captured by LET1 or LET2 without providing decisive grounds for one or the other.

This train of thought might suggest that some metaphysics bottoms out in “phenomenology” – which seems to commit the metaphysical indeterminist to the “mental eye” theory of pre-discursive concepts disparaged by Sellars and others. However, what is at issue, here, is non-propositional access to the world. One way of saying this is that such access is “non-conceptual” – though this seems to presuppose that concepts (whatever they are) are components of or parasitic on propositions, and this may not be the case.

However, there is a further problem. If Scott Bakker and I are right, our grip on phenomenology is extremely tenuous (Roden 2013). So if metaphysical indeterminism is warranted, there are non-discursive reasons for believing there are metaphysically indeterminate facts. But the nature of these facts is obscure so long as our phenomenology is occluded. Now, there is no reason in principle why a subject cannot believe p on the basis of some evidence without being in a position to explain how the evidence supports p. This weakens their public warrant but does not vitiate it. So we may have weak grounds for metaphysical indeterminism, but these are better than no grounds at all.

References

Bullot, Nicolas, Roberto Casati, Jérôme Dokic, and Maurizio Giri. 2004. Sounding objects. In Proceedings of Les journées du design sonore, p. 4. Paris. October 13–15.

Casati, Roberto, and Jérôme Dokic. 2005. La philosophie du son, http://jeannicod.ccsd.cnrs.fr. Accessed 3 June 2005, Chapter 3, p. 41.

O’Callaghan, Casey. 2009. Sounds and events. In Matthew Nudds & Casey O’Callaghan (eds.), Sounds and Perception: New Philosophical Essays. Oxford University Press. 26–49.

Roden, David. 2010. ‘Sonic Art and the Nature of Sonic Events’, Review of Philosophy and Psychology 1(1) (special issue: Objects and Sound Perception): 141–156.

Roden, David. 2013. ‘Nature’s Dark Domain: An Argument for a Naturalized Phenomenology’, Royal Institute of Philosophy Supplement 72(1): 169–188.


No Future? Catherine Malabou on the Humanities

On February 19, 2014, in Uncategorized, by enemyin1

Catherine Malabou has an intriguing piece on the vexed question of the relationship between the “humanities” and science in the journal Transeuropeennes.

It is dominated by a clear and subtle reading of Kant, Foucault and Derrida’s discussion of the meaning of Enlightenment and modernity. Malabou argues that the latter thinkers attempt to escape Kantian assumptions about human invariance by identifying the humanities with “plasticity itself”. The Humanities need not style themselves in terms of some invariant essence of humanity. They can be understood as a site of transformation and “deconstruction” as such.  Thus for Derrida in “University Without Condition”, the task of the humanities is:

the deconstruction of « what is proper to man » or to humanism. The transgression of the transcendental implies that the very notion of limit or frontier will proceed from a contingent, that is, historical, mutable, and changing deconstruction of the frontier of the « proper ».

Whereas, for Foucault, the deconstruction of the human involves exhibiting its historical conditions of possibility and experimenting with these by, for example, thinking about “our ways of being, thinking, the relation to authority, relations between the sexes, the way in which we perceive insanity or illness”.

This analysis might suggest that the Humanities have little to fear from technological and scientific transformations of human bodies or minds; they are just the setting in which the implications of these alterations are hammered out.

This line of thought reminds me of a revealingly bad argument produced by Andy Clark in his Natural Born Cyborgs:

The promised, or perhaps threatened, transition to a world of wired humans and semi-intelligent gadgets is just one more move in an ancient game . . . We are already masters at incorporating nonbiological stuff and structure deep into our physical and cognitive routines. To appreciate this is to cease to believe in any post-human future and to resist the temptation to define ourselves in brutal opposition to the very worlds in which so many of us now live, love and work (Clark 2003, 142).

This is obviously broken-backed: that earlier bootstrapping didn’t produce posthumans doesn’t entail  that future ones won’t. Even if humans are essentially self-modifying it doesn’t follow that any prospective self-modifying entity is human.

The same problem afflicts Foucault and Derrida’s attempts to hollow out a reservation for humanities scholars by identifying them with the promulgation of transgression or deconstruction. Identifying the humanities with plasticity as such throws the portals of possibility so wide that it can only refer to an abstract possibility space whose contents and topology remain closed to us. If, with Malabou, we allow that some of these transgressions will operate on the material substrate of life, then we cannot assume that its future configurations will resemble human communities or human thinkers – thinkers concerned with topics like sex, work and death for example.

Malabou concludes with the suggestion that Foucault and Derrida fail to confront a quite different problem. They do not provide a historical explanation of the possibility of transformations of life and mind to which they refer:

They both speak of historical transformations of criticism without specifying them. I think that the event that made the plastic change of plasticity possible was for a major part the discovery of a still unheard of plasticity in the middle of the XXth century, and that has become visible and obvious only recently, i.e. the plasticity of the brain that worked in a way behind continental philosophy’s back. The transformation of the transcendental into a plastic material did not come from within the Humanities. It came precisely from the outside of the Humanities, with again, the notion of neural plasticity. I am not saying that the plasticity of the human has to be reduced to a series of neural patterns, nor that the future of the humanities consists in their becoming scientific, even if neuroscience tends to overpower the fields of human sciences (let’s think of neurolinguistics, neuropsychoanalysis, neuroaesthetics, or of neurophilosophy), I only say that the Humanities had not for the moment taken into account the fact that the brain is the only organ that grows, develops and maintains itself in changing itself, in transforming constantly its own structure and shape. We may evoke on that point a book by Norman Doidge, The Brain that changes itself. Doidge shows that this changing, self-fashioning organ is compelling us to elaborate new paradigms of transformation.

I’m happy to concede that the brain is a special case of biological plasticity, but, as Eileen Joy notes elsewhere, the suggestion that the humanities have been out of touch with scientific work on the brain is unmotivated. The engagement between the humanities (or philosophy, at least) and neuroscience already includes work as diverse as Paul and Patricia Churchland’s work on neurophilosophy and Derrida’s early writings on Freud’s Scientific Project.

I’m also puzzled by the suggestion that we need to preserve a place for transcendental thinking at all here. Our posthuman predicament consists in the realization that we are alterable configurations of matter and that our powers of self-alteration are changing in ways that put the future of human thought and communal life in doubt. This is not a transcendental claim. It’s a truistic generalisation which tells us little about the cosmic fate of an ill-assorted grab bag of  academic disciplines.

References

Clark, A. 2003. Natural-born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. New York: Oxford University Press.


CFP: SEP-FEP 2014 Utrecht, 3-5 September

February 19th, 2014 | Author: johnm

CALL FOR PAPERS

The Society for European Philosophy and Forum for European Philosophy


Joint Annual Conference

 

Philosophy After Nature

Utrecht University

3-5 September 2014

The Joint Annual Conference of The Society for European Philosophy and Forum for European Philosophy in 2014 will be hosted by the Centre for the Humanities, the Faculty of Humanities and the Descartes Institute, Utrecht University, the Netherlands.

Plenary speakers
Professor Michel Serres, Stanford University, Académie française

Information and Thinking/l’information et la pensée

respondent: Professor Françoise Balibar, Université Paris-Diderot

Professor Rahel Jaeggi, Humboldt-Universität zu Berlin

Critique of Forms of Life

respondent: t.b.a.

Professor Mark B.N. Hansen, Duke University
Entangled in Media, Towards a Speculative Phenomenology of Microtemporal Operations

respondent: t.b.a.

The SEP/FEP conference is the largest annual event in Europe that aims to bring together researchers, teachers and others, from different disciplines, interested in all areas of contemporary European philosophy. Submissions are therefore invited for individual papers and panel sessions in all areas of contemporary European philosophy. For 2014, submissions that address the conference’s plenary theme – Philosophy After Nature – are particularly encouraged. This would include papers and panels that are after nature in the sense of being in pursuit of nature’s consequences. We invite perspectives on critique, science, ecology, technology and subjectivity as bound up with conceptions of nature and  experiment with various positions in contemporary thought.

Abstracts of 500 words for individual paper submissions and proposals for panels should be sent to Rick Dolphijn (philosophyafternature@uu.nl) by 17 May 2014. Proposals for panels should include a 500-word abstract for each paper within the panel. Proposals from academics, graduate students and independent scholars are welcome.
Conference committee: Rosi Braidotti, Bert van den Brink, Rick Dolphijn, Iris van der Tuin and Paul Ziche.

Enquiries: Rick Dolphijn (philosophyafternature@uu.nl)


People and cultures have some non-overlapping beliefs. Some folk believe that there is a God, some that there is no God, some that there are many gods. Some people believe that personal autonomy is a paramount value, while others feel that virtues like honour and courage take precedence over personal freedom. These core beliefs are serious, in that they make a difference to whether people live or die, or are able to live the kinds of life that they wish. People fight and die for the sake of autonomy. People fight, die or institute gang rapes in the interests of personal honour.

Some folk – the self-styled pluralists – believe that respect for otherness is a paramount political value. Respecting otherness, they say, is so paramount that it should regulate our ontological commitments – our assumptions about what exists. I must admit that I find this hard to credit ontologically or ethically. But it is also unclear how we should spell the principle out. So I’ll consider two versions that have circulated in the blogosphere recently. The first, I will argue, teeters on incoherence or, where not incoherent, is hard to justify in ethical or political terms. The second – which demands that we build a common world – may also be incoherent, but I will argue that we have no reason to think that its ultimate goal is realisable.

According to Philip at Circling Squares, Isabel Stengers and Bruno Latour think that this position should enjoin us to avoid ridiculing or undermining others’ values or ontologies. Further, that we should:

grant that all entities exist and, second, that to say that someone’s cherished idol (or whatever disputed entity they hold dear) is non-existent is a ‘declaration of war’ – ‘this means war,’ as Stengers often says.

I’ll admit that I find the first part of this principle damn puzzling. Even if we assume – for now – that it is wrong to attempt to undermine another person’s central beliefs, this principle seems to require a) that people actually embrace ontological commitments that are contrary to the ones they adhere to; b) that they pretend not to have their core beliefs; or c) that they adopt some position of public neutrality vis-à-vis all core beliefs.

The first interpretation (a) results in the principle that one should embrace the contrary of every core belief; or, in effect, that no one should believe anything. So (in the interests of charity) we should pass on.

b) allows us to have beliefs so long as they are unexpressed. Depending on your view of beliefs, this is either incoherent (because there are no inexpressible beliefs) or imposes burdens on believers that no one is likely to find acceptable.

So I take Philip to embrace c). His clarification suggests something along these lines: he claims that it is consistent with respecting otherness to say what we believe about others’ idols but not to publicly undermine their reasons for believing in them. Thus:

Their basic claim seems to be that ‘respect for otherness,’ i.e. political pluralism, can only come from granting the entities that others hold dear an ontology, even if you don’t ‘believe’ in them.  You are thus permitted to say ‘I do not follow that god, he has no hold over me’ but you are not permitted to say ‘your god is an inane, infantile, non-existent fantasy, grow up.’  And it’s not just a question of politeness (although there’s that too).  The point is to grant others’ idols and deities an existence – one needn’t agree over what that existence entails, over what capacities that entity has or what obligations it impresses upon you as someone in its partial presence but to deny it existence entirely is to ‘declare war’ – to deny the possibility of civil discourse, of pluralistic co-existence.

I must admit that I find this principle of respect puzzling as well. After all, some of my reasons for being an atheist are also reasons against being a theist. So unless this is just an innocuous plea for good manners (which I’m happy to sign up to on condition that notional others show me and mine the same forbearance) it seems to require that all believers keep their reasons for their belief to themselves. This, again, seems to demand an impossible or repugnant quietism.

So, thus far, ontological pluralism seems to be either incoherent or to impose such burdens on all believers that nobody should be required to observe it. There is, of course, a philosophical precedent for restricted ontological quietism in Rawls’ political liberalism. Rawls proposes that reasonable public deliberation recognize the “burdens of judgement” by omitting any justification that hinges on “comprehensive” ethical or religious doctrines over which there can be reasonable disagreement (Rawls 2005, 54). Deliberations about justice under Political Liberalism are thus constrained to be neutral towards “conflicting worldviews” so long as they are tolerant and reasonable (Habermas 1995, 119, 124-5).

However, there is an important difference between the political motivations behind Rawlsian public reason and the position of “ontological charity” Philip attributes to Stengers and Latour. Rawls is motivated by the need to preserve stability within plural democratic societies. Public reason does not apply outside the domain of political discourse in which reasonable citizens hash out basic principles of justice and constitutional essentials. It is also extremely problematic in itself. Habermas argues that Rawls’ exclusion of plural ethical or religious beliefs from the public court is self-vitiating, because comprehensive perspectives are sources of disagreement about shared principles (for example, the legitimacy of abortion or same-sex marriage) and these must accordingly be addressed through dialogue rather than circumvented if a politically stable consensus is to be achieved (126).

Finally, apart from being incoherent, the principle of ontological charity seems unnecessary. As Levi Bryant points out in his realist retort to the pluralist, people are not the sum of their beliefs. Beliefs can be revised without effacing the believer. Thus an attack on core beliefs is not an attack on the person holding those beliefs.

So it is hard to interpret the claim that we should grant the existence of others’ “idols” as much more than the principle that it is wrong to humiliate, ridicule or insult people because of what their beliefs are. This seems like a good rule of thumb, but it is hard to justify the claim that it is an overriding principle. For example, even if Rushdie’s Satanic Verses “insults Islam”, having an open society in which aesthetic experimentation and the critical evaluation of ideas is possible is just more important than saving certain sections of it from cognitive dissonance or intellectual discomfort. Too many people have suffered death, terror and agony because others had aberrant and false core beliefs to make it plausible that these should be immune from criticism or ridicule. A little personal dissonance is a small price to pay for not going to the oven.

So what of the principle that we should build a “common world”? This is set out by Jeremy Trombley in his Struggle Forever blog under the rubric of “cosmopolitics”. Jeremy regards this project as an infinite task that requires us to seek a kind of fusion between different worldviews, phenomenologies and ontologies:

The project, as Latour, Stengers, James, and others have described it, is to compose a common world. What pluralism recognizes is that, in this project, we all start from different places – Latour’s relativity rather than relativism. The goal, then, (and it has to be recognized that this project is always contingent and prone to failure) is to make these different positions converge, but in a way that doesn’t impose one upon the other as the Modern Nature/Culture dichotomy tends to do. Why should we avoid imposing one on the other? In part because it’s the right thing to do – by imposing we remove or reduce the agency of the other. The claim to unmediated access to reality makes us invulnerable – no other claim has that grounding, and therefore we can never be wrong. But we are wrong – the science of the Enlightenment gave us climate change, environmental destruction, imperialism in the name of rationality (indigenous peoples removed from their land and taken to reeducation facilities where they were taught “rational” economic activities such as farming), and so on. It removed us from the world and placed us above it – the God’s eye view.

I think there are a number of things wrong with cosmopolitics as Jeremy describes it here.

Firstly, seeking to alter beliefs or values does not necessarily reduce agency because people are not their beliefs.

Secondly, some worldviews – like the racist belief-systems that supported the European slave trade – just need to be imposed upon because they are bound up with violent and corrupting socio-political systems.

Thirdly, I know of no Enlightenment thinker, or realist, for whom “unmediated access to reality” is a sine qua non for knowledge. Let’s assume that “realism” is the contrary of pluralism here. It’s not clear what unmediated access would be like, but all realists are committed to the view that we don’t have it: if reality has a mind-independent existence and nature, it can presumably vary independently of our beliefs about it. In its place, we have various doctrines of evidence and argument that are themselves susceptible to revision. Some analyses of realism suppose that realists are committed to the claim that there is one true account of the world (the God’s Eye View) but – as pointed out in an earlier post – this commitment is debatable. In any case, supposing the existence of a uniquely true theory is very different from claiming to have it.

Finally, much hinges on what we mean by a common world here. I take it that it is not the largely mind-independent reality assumed by the realist since – being largely mind-independent – it exists quite independently of any political project. So I take it that Jeremy is adverting to something like a shared phenomenology or experience: a kind of fusion of horizons at the end of time. If we inflect “world” in this sense, then there is no reason for believing that such an aim is coherent, let alone achievable. Its possibility depends on there being structures of worldhood common to all beings that can be said to have a world (Daseins, say). I’ve argued that we have no access to such a priori knowledge because – like Scott Bakker – I hold that phenomenology gives us very limited insight into its own nature. Thus we have no a priori grasp of what a world is and no reason to believe that Daseins (human or nonhuman) could ever participate in the same one. The argument for this is lengthy, so I refer the reader to my paper “Nature’s Dark Domain” and my forthcoming book Posthuman Life.


On November 29, 2013, in Uncategorized, by enemyin1

Nature has just published a dark philosophical tale by leading philosopher of mind Eric Schwitzgebel and Three Pound Brainer Scott Bakker. Enjoy!

Cetacean Personhood

On November 22, 2013, in Uncategorized, by enemyin1

Filippo Bertoni and Uli Beisel have written an interesting and intriguing essay on the ethical challenges posed by extending person-status to cetaceans: “More-than-human intelligence: of dolphins, Indian law and the multispecies turn” over at Society and Space. I only wish the terms in which their discussion is framed had been clearer. For example, they provide no working conception of personhood – though there are many philosophical candidates – and their passing allusion to sharing “somatic sensibilities” is similarly vague. This is a pity because the philosophical and ethical problem they raise is genuine. Should we regard human-style personhood as the tip of a cosmic moral hierarchy from which the only way is down? If not, then how do we negotiate relationships with beings whose lives are phenomenologically different from, but perhaps no less valuable than, our own?


Xenakis and the Missing Structure

On April 20, 2013, in Uncategorized, by enemyin1

Loop

[A slightly edited extract from my paper “Nature’s Dark Domain: An Argument for a Naturalised Phenomenology”, Royal Institute of Philosophy Supplement 72 (July 2013): 169–188 – with audio!]

Most listeners will readily distinguish an eight-second sequence from Xenakis’ pioneering ‘granular’ composition Concret Ph

[audio: ConcSequence]

and a loop that repeats the first one-second slice of it for eight seconds.

[audio: ConLoop]

The loop is discernible because of the obvious repetition in pitch and dynamics.

Telling the looped sequence from the non-looped sequence is not the same as acquiring subjective identity conditions that would allow us to recognise the extra structure distinguishing the non-looped from the looped sequence in a different context (e.g. within the entirety of Concret Ph). What is discerned here is arguably a fact about the shortfall between type-identifiable phenomenology and non type-identifiable phenomenology (“unintuitable” or “dark” phenomenology).

As an illustration of this, the mere awareness that there is missing structure in the loop does not help settle the issue between virtualist and occurrentist construals of that structure. It is plausible to suppose that the perceptual awareness of the missing structure in the Xenakis loop consists of virtual contents – a representation of tendencies in the developing sound rather than something like a constantly updated encoding of discrete sonic facts [1]. Indeed, the virtual model would be consistent with the widely held assumption that our representation of temporal structure is accomplished via recurrent neural architecture that modulates each current input by feeding back earlier input.[2] But whether the contents of representations of temporal structure are virtual or occurrent in nature has no direct bearing on their conceptual or intuitive accessibility.
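The recurrent picture mentioned above can be given a toy illustration. The sketch below is purely my own (it is not Van Gelder’s model, and the leaky-integrator update, random “granular” signals, and cycle_similarity helper are all illustrative inventions): a recurrent state modulates each current input by feeding back earlier input, so the state trajectory for a looped signal nearly repeats after the first cycle while that of a non-looped signal does not – even though no discrete record of past samples is stored anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

def recurrent_state(signal, alpha=0.9):
    """Leaky integrator: each new state folds earlier input back into the present."""
    h = 0.0
    states = []
    for x in signal:
        h = alpha * h + (1 - alpha) * x  # feedback of earlier input
        states.append(h)
    return np.array(states)

def cycle_similarity(states, period=100):
    """Correlation between two successive one-cycle windows of the state trajectory."""
    a = states[period:2 * period]
    b = states[2 * period:3 * period]
    return np.corrcoef(a, b)[0, 1]

# Stand-ins for the audio: a one-second "granular" slice tiled eight times,
# versus an eight-second non-looped original.
slice_ = rng.standard_normal(100)
looped = np.tile(slice_, 8)
original = rng.standard_normal(800)

h_loop = recurrent_state(looped)
h_orig = recurrent_state(original)

print(cycle_similarity(h_loop), cycle_similarity(h_orig))
```

The looped signal’s state trajectory settles into a near-exact cycle, while the original’s two windows are uncorrelated; the repetition is thus implicit in the evolving state rather than occurrently encoded, which is roughly the sense in which such content could be called virtual.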

 


[1]Tim Van Gelder, ‘Wooden Iron? Husserlian Phenomenology Meets Cognitive Science’, Electronic Journal of Analytic Philosophy, 4, 1996.

[2]Op. cit.


Phenomenology and Naturalism

On April 4, 2013, in Uncategorized, by enemyin1

My article “Nature’s Dark Domain: An Argument for a Naturalized Phenomenology” is out in Havi Carel and Darian Meacham’s Phenomenology and Naturalism collection. Other authors include Michael Wheeler, Alison Assiter, Rudolph Bernet, Dermot Moran, and Iain Grant.
