Since Philperc’s Posthuman Life reading group got into gear a month ago, I’ve been dealing with numerous objections to the theses in Posthuman Life. But I’ve not been beset in quite the way I had expected. In my simplicity, I had assumed that the epistemological claims for unbounded posthumanism developed in Chapters 3 and 4 (and in later work on Brandom and hyperplasticity) would attract flak from analytical pragmatists and phenomenologists who want to retain a priori constraints on (post)human possibility. Somewhat to my surprise, fire has been concentrated on the positive thesis of speculative posthumanism (SP), and on the disconnection thesis (DT) in particular.
Retrospectively, it shouldn’t be all that shocking. The DT is a big, lumbering target. As Rick Searle observes in his review on the IEET site, it is an attempt to impose conceptual uniformity on unknown but conceivably highly diverse conditions while taking full account of our current ignorance of posthuman natures. The fact that it attempts to lay out clear satisfaction conditions for posthumanity is like a big “Hit Me!” sign inviting counter-examples, problem cases and deconstructions. Something had to give, it seems.
To date the objections have come from two sides. A critical posthumanist objection (articulated in different forms by Searle and Debbie Goldgaber) exploits an analytic distinction between disruptive technical change internal to the Wide Human network and the agential independence required by DT. This is already implicit in the work on anthropologically unbounded posthumanism, where I argue that our knowledge of posthuman possibility is tenuous.
Well, the argument goes, so is our grasp of wide human possibility. Searle argues that the Wide Human network (WH) could diverge from current humanity without disconnecting from it. There could be developments that are a) intrinsically alien or weird and b) lead not to independence from WH but to a radical transformation or extension of it:
[What] real posthuman weirdness would seem to require would be something clearly identified by Roden and not dependent, to my lights, on his disruption thesis being true. The same reality that would make whatever follows humanity truly weird would be that which allowed alien intelligence to be truly weird; namely, that the kinds of cognition, logic, mathematics, science found in our current civilization, or the kinds of biology and social organization we ourselves possess to all be contingent. What that would mean in essence was that there were a multitude of ways intelligence and technological civilizations might manifest themselves of which we were only a single type, and by no means the most interesting one. Life itself might be like that with the earthly variety and its conditions just one example of what is possible, or it might not.
According to this story, posthumanity (in the sense of a weird succession to current humanity) does not presuppose disconnection. Disconnection is not necessary for posthumanity.
A more radical riposte comes from Scott Bakker. He argues that the notion of agency I develop in the Chapter 6 clarification of the disconnection conditions is a folk notion that fails to capture the radically non-agential possibilities opened up by a technological singularity. For Bakker, the singularity is the posthuman.
I think he’s right to have issues with my notion of agency. It’s a kluge designed to meet my systematic aims and requires a more detailed metaphysical exposition. For all that, I don’t think Scott has made a persuasive case for expunging agents from our ontology, yet.
In contrast to the critical posthumanists, Jon Cogburn has argued that disconnection may not be sufficient for posthumanity. There are conceivable divergences from the human, allowed by our current understanding of biology, that are trivial and thus do not merit the concern the DT is intended to articulate. He cites the non-sapient fishlike successors of current humans depicted in Vonnegut’s novel Galapagos as examples of trivial posthuman succession. The Disconnection Thesis states that a being is posthuman iff:
- It has ceased to belong to WH (the Wide Human) as a result of technical alteration.
- Or it is a wide descendant of such a being (outside WH) (PHL 112)
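The schematic statement above can be regimented, very roughly, in first-order terms. This is only a sketch: the predicate letters are shorthand of my own, not notation from the book, and the schema deliberately leaves the scope of “technical alteration” unanalysed.

```latex
% Rough regimentation of DT. Predicate letters are shorthand only:
%   P(x)   : x is posthuman
%   W(x)   : x belongs to the Wide Human network (WH)
%   T(x)   : x has ceased to belong to WH as a result of technical alteration
%   D(x,y) : x is a wide descendant of y
\forall x \,\bigl[\, P(x) \leftrightarrow
    T(x) \lor \exists y \,\bigl( P(y) \land D(x,y) \land \neg W(x) \bigr) \,\bigr]
```

Put this way, the Galapagos objection discussed below is a question about how permissively the predicate T may be satisfied.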
The fish successors in Galapagos qualify as posthuman trivially according to Cogburn.
Their ancestors underwent mutation due to fallout from a nuclear war. Either they have ceased to belong to WH in virtue of a technical alteration in their environment, or they qualify as descendants of such beings. Yet they do not constitute an ontological novelty. They are no weirder than any other nonhuman life form, and they do not exhibit a particularly high degree of functional autonomy.
Cogburn’s objection is elegant and immensely entertaining – do read it! For this reason alone (and because I’ve responded extensively to Bakker and Goldgaber over at philpercs) I want to focus on it in this post.
As he makes clear, the problem posed by the Galapagos example concerns an apparent ambiguity in the scope of the first condition of DT. If a “technical alteration” is construed to include any change in the world arising indirectly from human technical activity (nuclear war, in this case), then any evolutionary process it catalyzed that resulted in nonhumans with human ancestors would be a posthuman-maker. But I want to argue that posthumans would have to have significant functional autonomy (or power) to escape the influence of WH, whereas no such power is implied in Galapagos-type cases. The “posthumous” fish do not have to break out of a fish farm, for example. WH simply withers away as narrow humans develop in ways that do not suffice to maintain it.
Now, there are various responses to the Galapagos objection. Some of these involve amendments to the schematic statement of DT. This has happened before.
Three years ago, Søren Holm pointed out that a similarly trivial result could be achieved if posthumans decided to produce biological humans as wide descendants who were subsequently reabsorbed into WH. Hence the current stipulation that wide descendants of posthumans remain outside WH.
I think Pete Mandik suggests the way this should go in a Twitter response where he writes: “the solution involves distinguishing between being a technical alteration and being an effect of a T.A.” Radiation from a nuclear war is an environmental change: not a technical change but an effect of one. Likewise, the increased mutation rate that results in the post-sapient fish people is an effect of a technical change, not a technical change itself.
It may seem that I’m leaning on a leaky distinction between direct and indirect technical causes here. To say that the increased mutation rate is not a technical change is just to say that it is indirectly rather than directly caused by technical change. However, it could be objected that there is no principled (non-observer-relative) way of distinguishing between direct and indirect causes in any instance. All causation is mediated by intervening causes if we but look (experts on the metaphysics of causation might beg to differ, of course).
But we can avoid having to make the distinction between direct and indirect technological causes by stipulating that the process of ceasing to be human must result from the exercise of technological powers by the disconnecting beings themselves.
This is not true of the post-people of Galapagos. They do not exercise the technological powers that result in the withering away of WH. They are effects of its exercise by others.
This clarification comports well with the DT and the assemblage theory in which it is framed (more particularly, with the philosophy of technology laid out in Chapter 7), though it is not an explicit consequence of the schematic formulation. There might be a way of reformulating DT to allow this (along the lines of my response to Holm), but for reasons of time and incompetence I’ll hold off on that here.
Posthumans, like humans, have components which instantiate technologies. It doesn’t follow that they are technologies, of course. I’m inclined to the view that technologies are abstract particulars concretized in disparate forms and contexts. Vonnegut’s post-people don’t instantiate or exercise such technologies. So they don’t qualify as posthumans.
There are other responses. One could just allow that Vonnegut’s post-people are posthuman but boring – not the kind that elicits our moral concern. However, I think the clarification suggested by Mandik provides a more robust response, since it makes clear why the DT articulates our moral concern with posthuman possibility. The posthuman – on this account – is inherently disruptive because of an independence from human ends that results from the emergence of new technical powers. This independence implies significant functional autonomy because the technical powers exhibited by posthumans are no longer exercised by us.
Beings exhibiting this independence need not be maximally weird; indeed, I allow for disconnections involving posthumans that are not radically alien in any way (e.g. genetically engineered super-cooperators, Cylons or some such). In any case, the evocation of the weird is designed to suggest the epistemic scope for divergence (given anthropological unboundedness). Nothing is weird as such, or intrinsically, unless we allow for the kind of radical transcendence contemplated in negative theology.
Françoise Balibar, Professor Emeritus of physics at the Université Denis Diderot (Paris VII), gave a wonderful keynote on the final day of the Philosophy After Nature conference in Utrecht, whose title was drawn from Ernst Mach’s aphorism Die Natur ist nur einmal da (Nature is there only once).
Here she discussed the philosophical implications of failures of univocity or “complete determination” in areas such as space-time physics – points where there seems to be no way of uniquely individuating objects by all the properties assigned in physical theory. A key example here was Einstein’s ‘hole argument’, which some take to imply that mathematically distinct models of the same space-time built on alternate coordinate assignments are physically equivalent (or, for old-style realists about space-time, that the manifold has additional but observationally inaccessible structure). The upshot was that we can no longer view events as individuated by their relations to an independently subsisting world or subject (to observe events, you must be amid them!). It also included an intriguing reference to Deleuze’s claim that physical science has no concept of difference.
I haven’t unpacked the implications of her talk by any means but would be delighted to discuss these themes further.
Altogether an inspiring ending to a wonderful conference characterized by some excellent keynotes and panels.
Epistemic indeterminacy concerns our representations of things rather than the things themselves. Thus the location of a mobile phone with a Nokia ringtone may be represented as indeterminate between your pocket and your neighbor’s handbag. This epistemic indeterminacy is resolvable through the acquisition of new information: here, by examining the two containers. By contrast, metaphysical indeterminacy – if such there be – is brute. It cannot be cleared up by further investigation.
We can thus distinguish between φ being indeterminately represented and being indeterminately φ in situations where it is possible to progressively reduce and eliminate the former indeterminacy (Roden 2010: 153).
Facts are metaphysically indeterminate if they involve indeterminate natures. The nature of a thing is indeterminate if it is impossible to determine it via some truth-generating procedure that will eliminate competing descriptions of it. Clearly, some will cavil with my use of “fact” and “nature” either because they see “facts” as ineluctably propositional or because they have nominalist quibbles about attributing any kind of nature or facticity to the non-conceptual sphere. However, like Marcus Arvan, I don’t see any conceptual affiliation as ineluctable. If the world is structured in ways that cannot be captured without remainder in propositions, it is not inappropriate to use the term “fact” to describe these structures – or so I will proceed to do here.
My favorite case of putative metaphysical indeterminacy involves the two versions of the Located Events Theory of sound. LET1 (Bullot et al. 2004; Casati and Dokic 2005) states that sounds are resonance events in objects; LET2 says that sounds are disturbances in a medium caused by vibrating objects (O’Callaghan 2009). According to LET1 there are sounds in vacuums so long as there are objects located in them. According to LET2 there are not. So the theories have different implications. Yet nothing in our ordinary observations and inferences regarding sound obviously favours one over the other.
As I put it in “Sonic Events”, most people would probably judge that there is no sound produced when a tuning fork resonates in an evacuated jar – “Yet were the air in a jar containing a vibrating tuning fork to be regularly evacuated and replenished we might perceive this as an alteration in the conditions of audition of a continuous sound, rather than the alternating presence and absence of successive sounds” (Roden 2010: 156). You pays yer money; but it’s hard to believe that the world cares how we describe this state of affairs, or that persuasive grounds will settle the matter one bright day.
Anti-realists might say that this indeterminacy is practical rather than factive: it merely reflects discrepant uses of the same lexical item (“sound”). So (as in the case of metaphysical indeterminacy) there is no information-gathering procedure that would settle the issue. But on this view that is not because the nature of sound is indeterminate in this respect. Rather, there is no deeper (determinate or indeterminate) fact here at all.
However, this ignores the fact that LET1 and LET2 are responsive to an auditory reality that they both describe, albeit in incompatible ways. Sounds existed before there were ontologies of sound and thus have an independent reality to which LET1 and LET2 attest. If so there must be a deeper fact which accounts for the indeterminacy.
Now, either this fact is indeterminate or it is not.
If it is not, then there is some uniquely ideal account of sound: call it ITS. The ideal theory cannot be improved via the acquisition of further information because it already contains all the relevant information there is to be had and has no empirically equivalent competitors (there is no ITS2, etc.). ITS might or might not be an event theory – e.g. it could be a “medial theory” which represents sounds as the transmission of acoustic energy (Bullot et al. 2004). So ITS ought to replace both LET1 and LET2. We may not be aware of it, but we know that it exists somewhere in Philosophers’ Heaven (or the Space of Reasons).
If the fact in question is indeterminate, there is no ideal account which captures the nature of sound. Or rather, the best way to capture it is in the alternation between different accounts.
Given indeterminacy, then, there is an auditory reality which permits of description, but which cannot be completely described.
There is an interesting comparison to be made here between the indeterminacy of auditory metaphysics and the claims regarding the indeterminacy of semantic interpretation developed by Davidson and others. Again, one can take indeterminacy in a deflationary, anti-realist spirit – there are no semantic facts, just competing interpretations and explications recursively subject to competing interpretations ad infinitum (one popular way of glossing Derridean différance!).
Or there are semantic facts. In which case, these may be determinate or indeterminate. If there are determinate semantic facts, then the indeterminacy of radical interpretation is an artefact of our ignorance regarding semantic facts. If semantic facts are indeterminate, however, there is – again – a reality that is partially captured in competing interpretations that is never fully mirrored or reflected in them.
At this point it is interesting to consider why we might opt for factive or metaphysical indeterminacy rather than anti-realist indeterminacy. If we have reasons for believing in indeterminate facts – the ones for which there are irreducibly discrepant descriptions – this is presumably because we think there is some mind-independent reality outside our descriptions whose nature is indeterminate in some respects. If this thought is justified it is presumably not justified by any single description of the relevant domain. Nor by the underdetermination of descriptions (since this is equally consistent with anti-realism). So if we are justified in believing that there are indeterminate metaphysical facts, we must be justified by sources of non-propositional knowledge. For example, perhaps our perceptual experience of sound supports the claim that sounds occur in ways that can be captured by LET1 or LET2 without providing decisive grounds for one or the other.
This train of thought might suggest that some metaphysics bottoms out in “phenomenology” – which seems to commit the metaphysical indeterminist to the “mental eye” theory of pre-discursive concepts disparaged by Sellars and others. However, what is at issue here is non-propositional access to the world. One way of saying this is that such access is “non-conceptual” – though this seems to presuppose that concepts (whatever they are) are components of, or parasitic on, propositions, and this may not be the case.
However, there is a further problem. If Scott Bakker and I are right, our grip on phenomenology is extremely tenuous (Roden 2013). So if metaphysical indeterminism is warranted, there are non-discursive reasons for believing that there are metaphysically indeterminate facts. But the nature of these facts is obscure so long as our phenomenology is occluded. Now, there is no reason in principle why a subject cannot believe p on the basis of some evidence without being in a position to explain how the evidence supports p. This weakens their public warrant but does not vitiate it. So we may have weak grounds for metaphysical indeterminism, but these are better than no grounds at all.
Bullot, Nicolas, Roberto Casati, Jérôme Dokic, and Maurizio Giri. 2004. ‘Sounding Objects’. In Proceedings of Les journées du design sonore, p. 4. Paris, October 13–15.
Casati, Roberto, and Jérôme Dokic. 2005. La philosophie du son, Chapter 3, p. 41. http://jeannicod.ccsd.cnrs.fr. Accessed 3 June 2005.
O’Callaghan, Casey. 2009. ‘Sounds and Events’. In Matthew Nudds and Casey O’Callaghan (eds.), Sounds and Perception: New Philosophical Essays, 26–49. Oxford: Oxford University Press.
Roden, David. 2010. ‘Sonic Art and the Nature of Sonic Events’. In Objects and Sound Perception, special issue, Review of Philosophy and Psychology 1(1): 141–156.
Roden, David. 2013. ‘Nature’s Dark Domain: An Argument for a Naturalized Phenomenology’. Royal Institute of Philosophy Supplement 72(1): 169–188.
Over at Agent Swarm, Terrence Blake claims that Quentin Meillassoux’s notion of correlationism is excessively narrow since it disqualifies realist positions which respond to worries about access, objectivity and truth raised by transcendental philosophers from Kant through Husserl and Heidegger. I’m not sure whether Meillassoux’s speculative solution works, and I share his worries about Harman’s OOO. But I don’t see any reason to doubt that the concept “correlationism” beautifully describes a range of contemporary anti-realist philosophies, not all of which are written in the house style of the post-Kantian European tradition (Kant, Hegel, etc.). Hilary Putnam’s internal realism is a particularly salient example of correlationism within the pragmatist/analytic camp because it wears its Kantian heart on its sleeve.
Internal realism is a philosophical oxymoron, since it denies that there are things whose existence and nature are independent of human descriptive practices. The fact that Putnam expresses his variant of transcendental philosophy in the post-Wittgensteinian argot of linguistic practices and language-games, rather than transcendental subjects or Daseins, is largely irrelevant, since the roles that language and subjectivity play in correlationist philosophies are, to put it bluntly, correlative. (Perhaps, as Frank Farrell argues, “language” and “subjectivity” are a hangover from the nominalist God whose omnipotence extended to determining differences and similarities within an unstructured universe – see Farrell 1996.) Meillassoux does not address analytic correlationism in After Finitude, but his formulation of correlationism seems to apply to any post-Wittgensteinian position for which language and practice assume the mantle of the transcendental subject:
In the Kantian framework, a statement’s conformity to the object can no longer be defined in terms of a representation’s ‘adequation’ or ‘resemblance’ to an object supposedly subsisting ‘in itself’, since this ‘in itself’ is inaccessible. The difference between an objective representation (such as ‘the sun heats the stone’) and a ‘merely subjective’ representation (such as ‘the room seems warm to me’) is therefore a function of the difference between two types of subjective representation: those that can be universalized, and are thus by right capable of being experienced by everyone, and hence ‘scientific’, and those that cannot be universalized, and hence cannot belong to scientific discourse. From this point on, intersubjectivity, the consensus of a community, supplants the adequation between the representations of a solitary subject and the thing itself as the veritable criterion of objectivity, and of scientific objectivity more particularly. Scientific truth is no longer what conforms to an in itself supposedly indifferent to the way in which it is given to the subject, but rather what is susceptible of being given as shared by a scientific community.
Such considerations reveal the extent to which the central notion of modern philosophy since Kant seems to be that of correlation. By ‘correlation’ we mean the idea according to which we only ever have access to the correlation between thinking and being, and never to either term considered apart from the other. We will henceforth call correlationism any current of thought which maintains the unsurpassable character of the correlation so defined (Meillassoux 2006, 4-5).
Putnam is a modern Kantian because he regards ontology as internal to languages or conceptual schemes (though, for Putnam, unlike Kant, these categorical frameworks are historically contingent). There are no ontological facts that obtain independently of some fixation of language. Such facts would require the existence of a One True Theory of reality which, he claims, is precluded on model theoretic grounds:
The suggestion I am making, in short, is that a statement is true of a situation just in case it would be correct to use the words of which the statement consists in that way in describing the situation. Provided the concepts in question are not themselves ones which we ought to reject for one reason or another, we can explain what “correct to use the words of which the statement consists in that way” means by saying that it means nothing more nor less than that a sufficiently well placed speaker who used the words in that way would be fully warranted in counting the statement as true of that situation (Putnam 1987, 115).
As a number of commentators have argued, the semantic considerations that motivate Putnam’s shift from realism to internal realism are precisely the ones that motivated Kant to develop a non-representational account of concepts (see Moran 2000). While Putnam is exemplary, similar considerations apply to Dummett-style anti-realism. Davidson is a harder case because, unlike Putnam, he rejects epistemic accounts of truth (Davidson 1990, 307-9). However, Davidson thinks that what Tarski leaves out when he shows us how to determine the extension of the truth predicate relative to an object language L is a presupposition of our intersubjective practices of interpretation. Thus, as Jeff Malpas argues, Davidson is probably some kind of “horizontal realist” for whom the world must be understood as the open phenomenological background against which interpretative practices operate – thus looping us back to transcendental subjectivity in its most developed and subtle, but still humanist, formulation. Horizontal realism is still realism with something missing. It is not relativism, strictly speaking, but the “world” it presupposes is more like Husserl’s pre-theoretically given Lebenswelt than Meillassoux’s great outdoors (Malpas 1992).
Davidson, Donald (1990). The structure and content of truth. Journal of Philosophy 87 (6):279-328.
Farrell, Frank (1996). Subjectivity, Realism and Postmodernism: The Recovery of the World in Recent Philosophy. Cambridge: Cambridge University Press.
Malpas, J.E. (1992) Donald Davidson and the Mirror of Meaning. Cambridge: Cambridge University Press.
Moran, Dermot (2000). “Hilary Putnam and Immanuel Kant: Two `internal realists’?” Synthese 123 (1):65-104.
Meillassoux, Q. (2006) After Finitude: An Essay on the Necessity of Contingency, Ray Brassier (trans.). New York: Continuum.
Putnam, Hilary (1987). Representation and Reality. MIT Press.
Over at Larval Subjects, Levi has posted a ringing endorsement of naturalism and “materialism” designed to provoke a few readers within the Continental philosophy/theory community. The upshot of the post, as I read it, is that we live in a causally closed material world described by the natural sciences. Interactions between entities described at different scales by physics, chemistry, biology and astronomy are the only sources of order and agency. Nothing happens in the world other than as the effect of an antecedent physical state. Secondly, Levi claims that the forms of anti-naturalism expressed in the humanities via transcendental phenomenology, transcendental pragmatics, poststructuralist textualism, etc. are all attempts to repress the traumatic wound that belief in materialism and causal closure delivers to human exceptionalism. I quote:
In Freudian terms, these are so many responses to the narcissistic wound of nature and materiality. It is not the subject, lived experience, history, intentionality, the signifier, text, or power that explains nature, . . ., it is nature and materiality that explains all of these things. If these things aren’t treated as natural phenomena, then they deserve to be committed to flames. The point is not that these other orientations have failed to make contributions to our understanding of the natural world, but that they have mistakenly treated these things as grounds of the natural world, rather than the reverse.
Some might demur at the psychoanalytic framing (does psychoanalysis have the empirical support that a naturalist expects from a source of ontological insight? Should one care?), but the sentiments are sound and philosophically energizing. If we admit materialism and causal closure, then we need a decent theory of how the topics of the humanities fit into this world. If materialism is false or ill-defined, this needs to be demonstrated. The problem with a lot of recent continental philosophy is not that it is anti-naturalistic (some of my best friends are anti-naturalists, and we’re still talking) but that anti-naturalism has been a default attitude rather than a worked-through position. This hauteur was perfectly exemplified by Simon Critchley at a conference some years back, where he remarked that he didn’t care how consciousness was made by the brain, since such an explanation could be of no relevance to phenomenology.
Maybe Critchley was right and still is; but it’s not obvious that you can insulate phenomenological description from its ontological basis in this way. There’s a problem to be tackled here, whether one is a student of Dennett or of Derrida. Such metaphysical indolence should be unacceptable within any school of contemporary philosophy.
In “The Trace of Time and the Death of Life: Bergson, Heidegger, Derrida”, Martin Hägglund gives a brilliantly clear exposition of Derrida’s trace as a relationship that undermines both the continuity and the punctate discreteness of time, and poses an “arche-materiality” of time against a vitalistic/continuist conception of temporality.
The trace-structure is the minimal form of any temporality – an inextricable relation to a past that has never been present. Derrida might, on a first reading, appear to endorse something like a vitalist or continuist conception of time. He accepts that temporality requires the displacement of the temporal event from itself: a series of absolutely independent nows would not be a temporal series, any more than an unrepeatable sign could signify anything.
However, this displacement is not merely a feature of the time of consciousness or life: of memory and habit, say. According to Derrida, it is always “inscribed” in some material-spatial medium. Freud’s purely neurological trace, for example, consists of differences in the conduciveness of neural pathways to stimulation – a primary basis for memory which is always repeated differently (iterated) as a result of the causal action of subsequent stimuli on neural tissue.
The synthesis of time cannot be appropriated without spatial support by an immaterial life, subjectivity, Dasein, etc. Hägglund concludes that this implies an asymmetric dependence of life on matter. The living depends on the non-living, but is a contingent product of a physical nature characterized by an arche-material temporality. Life, consciousness, etc. depend on the material existence of the trace, but not vice versa. The trace is (somehow) built into physical reality and is equally implicit in inorganic or mechanical existence. The zombie-like repetition of the trace is as implicated in the most vivid conscious experience as it is in the evolution of inorganic material structures.
Here are the proceedings – including abstracts and podcasts – of what seems to have been a fascinating conference on information in philosophy, science and the humanities at my own institution, The Open University.
I just happened on this excellent post over at Speculative Heresy by Nick Srnicek entitled ‘Being No One: Metzinger and Kant’ three years down the line.
Being painfully slow in these things, I’m only halfway through BNO myself, but I agree with the other posters that it provides an excellent and very helpful précis.
I don’t agree, though, with Srnicek’s contention that Metzinger should be required to include an account of the nature of reality within his model of consciousness. Metzinger’s world-zero postulate gives a functional account of the relevant property: simulating the world ‘for-me’. This obviously doesn’t provide a metaphysical account of what it is to be real as such; but if the theory purports to be true, and not merely instrumentally adequate, it presupposes one. That is, the truth of Metzinger’s position requires that the models it employs – state space semantics, say – be approximately true or representationally adequate.
Metzinger’s aberration, if you can call it that, is merely to assume Scientific Realism. One could urge that Scientific Realism needs to be backed up by a metaphysical account – transcendental realism, say – but it’s not clear why Metzinger’s theory of consciousness should be required to provide it.
Some theories of consciousness or subjectivity do attempt to modalize transcendent reality in terms of its relation (or lack of one) to something immanent. For example, transcendental phenomenology gives an account of reality in terms of the transcendence of objects. This account only works if phenomenology provides an epistemically privileged yardstick of non-transcendence.
But Metzinger’s autoepistemic closure principle entails that introspection and intuition have no epistemic privileges – they are on a par with any other method of gauging the state of some bit of reality. My epistemic relationship to my phenomenology is as mediated as my relationship to quasars or the heart of the sun (autoepistemic closure does not entail cognitive closure – the self is also no more ‘noumenal’ or ‘transcendent’ than anything else).
Metzinger’s position implies that phenomenology is a legitimate undertaking but not one with any more relevance to metaphysics than geology. For sure, if we are realists we need a realist metaphysics of some kind, but it’s far from clear why we should expect Metzinger’s naturalistic theory of consciousness to provide it.