Stephen Shakespeare has an interesting post over at An und für sich discussing Hilary Putnam’s argument against Metaphysical Realism and the positions of contemporary speculative realists like Meillassoux and Harman. Putnam (circa Reason, Truth and History) treats Metaphysical Realism (MR) as a package deal with three components: Independence (there is a fixed totality of mind-independent objects); Correspondence (there are word-world relations between bits of theories and the things to which they refer); Uniqueness (there is one true theory that correctly describes the state of these objects).
He then uses his model-theoretic argument to undermine Uniqueness. Given an epistemologically ideal theory and an interpretation function which maps that theory onto one of some totality of possible worlds, you can always come up with another mapping and hence another theory that is equally true of that world, equally elegant, simple, well-confirmed, etc. Unless there is some property other than its epistemic and semantic virtues that picks out a single theory as God’s Own, Uniqueness fails and with it MR.
Shakespeare argues that speculative realists reject the form of the independence thesis, denying that there is a fixed totality of mind-independent objects:
[Contemporary Realism] need not entail a conviction that objects in the world are a ‘fixed totality’. Objects can change or join to form new, irreducibly real objects. The lists of objects which are part of the rhetorical style of OOO encompass radically diverse things, including physical assemblages, social groups and fictional works. Each of these ‘objects’ consists of other irreducible objects and so on. There is not simply one stratum of object.
For Meillassoux, the picture is different. In one respect, the absolute consists of the fact that anything can be different for no reason: there is no founding ontological or transcendental necessity for the order of things. And this is what we can know. So his realism also does not entail that there is one fixed totality, or one complete and true description of things.
I demur partly from this analysis of where SR diverges from MR – though I’m happy to be persuaded otherwise. By “fixed” Putnam just means determinate. If there are fictional objects or sensa, then these must be part of God’s Own Theory (given MR). If there are assemblages with emergent properties, they too might click into God’s Own Ontology. Moreover, the Harmiverse has to consist of discrete, encodable objects, so it’s quite susceptible to a model-theoretic analysis of the kind that Putnam offers (See my Harman on Patterns and Harms).
Shakespeare may be right about Meillassoux’s ontology. One could argue that hyperchaos is not a thing and thus cannot be part of a model.
If we read Hyperchaos as the absolute contingency of any thinkable possibility then representing hyperchaos might seem pretty easy. Meillassoux is just saying that any non-contradictory event could occur (I will not consider whether he is justified in saying this).
So perhaps his ontology just comes down to the claim that any arbitrary, non-contradictory sentence is true in at least one possible world.
I suspect (but cannot show) that the real problem with reconciling Meillassoux’s SR with MR lies in how one interprets this modality. Saying that any arbitrary, non-contradictory sentence is true in at least one possible world is not what Meillassoux has in mind, since this resembles a standard definition of de dicto contingency in possible world semantics. Moreover, Meillassoux (2010) denies that we have warrant to believe that the thinkable can be totalized a priori, on the grounds that set theory shows that there are always more things than can be contained in any totality. If this is right, then it is precipitate to assume a totality of all objects or a totality of all models under which God’s Own Theory could be interpreted. MR cannot even get started.
However, there are other ways in which contemporary realists (and not just speculative realists) could diverge from MR. For example, Devitt denies that realism is really committed to Uniqueness – the view that there is exactly “one true and complete description of the world” (Devitt 1984: 229). We might also demur from the assumption that the world consists of objects or only objects that enter into semantic relationships with bits of language or mind. Structural realists, for example, argue that reality is structure and that this is precisely what approximately similar theories capture – regardless of their official ontological divergences (Ladyman and Ross 2007: 94-5). Some speculative ontologies deny the Correspondence assumption, holding that the world contains entities that cannot be fully represented in any theory: e.g. powers, Deleuzean intensities.
Perhaps the Correspondence assumption just replicates the Kantian view that entities must conform to our modes of representation – in which case a robust realist should reject it in any case. This, interestingly, is where the issue of realism segues into the issues addressed in my forthcoming book Posthuman Life. For, analogously to Meillassoux’s claim about totalizing the thinkable, one can also reject the claim that we have any advance, future-proof knowledge of the forms in which reality must be “thought”. If we have no access to the space of possible minds, then we can have no a priori conception of what a world must be as such.
Devitt, M. 1984. Realism and Truth. Princeton: Princeton University Press.
Ladyman, J. and Ross, D. 2007. Every Thing Must Go: Metaphysics Naturalized. Oxford: Oxford University Press.
Meillassoux, Q. 2010. After Finitude: An Essay on the Necessity of Contingency, R. Brassier (trans.). London: Continuum.
Putnam, H. 1981. Reason, Truth and History. Cambridge: Cambridge University Press.
There’s an epic flame war over at Three Pound Brain in response to Scott Bakker’s discussion of Levi Bryant’s Object Oriented Ontology. I’m sitting this one out like my hero Custard the Cat – in part because I’m just too busy, and in part cos’ I don’t want to distract Scott from the trudge to Golgotterath and the moral necessity of euthanizing our immortal souls.
Metaphysical Realism (MR) is not one claim but, Putnam argues, a package of interrelated claims about the mind-world relationship. The key components of MR are 1) the independence thesis; 2) the correspondence thesis; 3) the uniqueness thesis. The independence thesis states that there is a fixed totality of mind-independent objects (the world). The correspondence thesis states that there are determinate reference relations between bits of language or mental representations and the bits of the world to which they refer. The uniqueness thesis states that there is exactly one true theory whose sentences correctly describe the states of all these objects. This implies a singular correspondence between the terms belonging to this theory and the objects and properties that they refer to (Putnam 1981, 49). As a package it is cohesive: the correspondence thesis needs mind-independent objects and properties for words to correspond to, and there must be some unique total fact about these objects if there is to be one correct way in which a theory can represent that total fact.
We can imagine this theory being expressed in a language consisting of names like “Fido” and “Shlomo”, property and relation terms like “…is a dog”, “…is a cat” or “…is father of…”, as well as all the quantificational apparatus that we need to make multiple generalizations: e.g. “There is at least one thing that is a cat” or “All dogs hate at least one cat”. Of course, since this is the one true theory we might expect it to contain enough mathematics (e.g. set theory) to express the true laws of physics, the true laws of chemistry, etc. However, for this to be the one true theory each true sentence that we can derive from it – e.g. “Shlomo is a cat” – must hook up with the world in the right way. For example, “Shlomo” must determinately refer to a unique object and this object must have the property referred to by “…is a cat” (this property might be the set of all cats or it might be a universal property of catness – again, depending on the metaphysical facts). [i]
An assignment of referents to terms along these lines is called an interpretation function. The set of objects, properties, relations, etc. that are matched up to terms by a particular interpretation function is called a model. Putnam’s account, then, in effect says that metaphysical realism is the claim that there is a unique description of the world, hooked up to that world by a single true interpretation function (matching names to objects, property terms to properties, etc.).
The uniqueness of the corresponding interpretation function is crucial here because if there were more than one good way of interpreting the terms of the one true theory, there would be alternative theories, each one corresponding to a different interpretation function for the constituent terms of its language.[ii] In that case, there would not be one correct description of the world. But if realism comes down to a commitment to there being a God’s eye view of the world – a uniquely true theory which picks out the way the world is – then realism would have to be rejected.
What is the virtue that makes the one true theory unique? Well, to count as the one true theory, it would, at minimum, need to satisfy all the “operational constraints” that ideally rational inquirers would impose on such a theory. For example, if one imagines science progressing to an ideal limit at which no improvements can be made in its explanatory power, coherence, elegance or simplicity, then the one true theory would have to be as acceptable to ideally rational enquirers as that theory (Putnam 1981, 30).
Putnam’s argument against realism is that, given a theory that satisfies this ideal of operational virtue, there would always be a second, equally good theory that can be constructed by giving the sentences of the first different interpretations. Further, he argues that there is nothing beyond operational virtue that might distinguish the first theory from the second, because there are no mind-independent semantic facts that specify the right interpretation. If this is right, then there cannot be a one true theory that completely describes the world.
The argument begins with a theorem of model theory.[iii] The model-theoretic notion of a theory is that it is a language L under a given interpretation function I which maps the terms of L onto a universe of objects and properties (properties are treated as sets of objects: for example, the relation of fatherhood would be the set of all ordered pairs, the second member of which is the son of the first member). The theorem states that for every theory T1 (consisting of a language L under interpretation I) it is possible to gerrymander a function J that interprets each term of L “in violently different ways, each of them compatible with the requirement that the truth value of each sentence in each possible world be the one specified” (Putnam 1981, 33, 217-218). The basic idea is that under these “permutated” interpretation functions, the sentences that come out true in T1 in a given possible world would come out true in T2 in that world.[iv] The two theories T1 and T2 would not differ in assignments of truth values to sentences in any possible world and – being expressed in the same words – would have exactly the same structure, so each would be as simple and as elegant as the other.
However, metaphysical realism is committed to the view that even an ideally confirmed and simple theory could be comprehensively false because truth is “radically non-epistemic” – that is, truth is a matter of whether a sentence corresponds with the world, not of how well confirmed that sentence is. This is, of course, the position that Descartes is committed to in his Evil Demon thought experiment. The semantic facts that give my beliefs reference to a possible world are unaffected by the existence or nature of the mind-external world. Putnam’s version of this realist conceit is the science fictional notion that we might be brains in vats being fed simulated experiences by a mad neurophysiologist. Thus, according to metaphysical realism, even a theory T1 that is operationally ideal and irrefutable for vat brains could still be false (Putnam 1978, 125). However, unlike Descartes, Putnam argues that this conceit is incoherent. If T1 is consistent it is possible to find an interpretation function that maps the language of T1 onto a model containing elements of whatever world happens to exist – even if that is vat-world. So under this interpretation T1 comes out true, not false (Putnam 1978, 126).
It can be objected that this would not be the interpretation “intended” by the vat brains (or the ensorcelled Descartes, if one prefers). But T1 would be operationally as good as it gets for the envatted. It would inform their practices of inference and prediction in just the same way that it would were it true. There seems to be nothing beyond these practices of judgment and inference that could fix the meaning of terms like “cat” or “dog” – though these are clearly not sufficient to give uniquely determinate meaning.
Some philosophers have argued that uniquely intended interpretations can be imposed by the contents of our beliefs or ideas. For example, maybe my idea of a cat and actual cats share a mysterious essence of catness which “exists both in the thing and (minus the latter) in our minds” and which, in turn, fixes the reference of property terms like “cat” (Putnam 1983, 206; 1981, 59-61). Putnam argues that this response makes recourse to a magic language of self-interpreting mental signs: it states, in effect, that there are mental representations that just mean what they mean irrespective of how the world is or of their role in inference. Here Putnam is in agreement with the French deconstructionist Jacques Derrida. For Derrida, as for Putnam, a sign is a mark that acquires its meaning by being used differently from other signs, whether the mark is spoken, written or occurs in the brain or in some purely mental medium (if such a thing exists). A particular inscription or brain state or sound only counts as a sign insofar as it functions or is used differently from other signs. The obvious candidates for “use” and “function” here are the roles of signs in inferences and in interpretative practices. But these, as has been seen, are unable to fix a unique model for T1.
So it does not matter whether we are talking about mental signs or signs in language: they derive meaning from their differential functioning. For Derrida this has the complicating consequence that any mark must be “iterable”: i.e. it can be lifted from its standard contexts and grafted into new ones, thereby acquiring different functions (Derrida 1988, 9-10). However, for our purposes, the important consequence is that appealing to “inner” or mental signs to fix the intended meanings of T1 seems to present us with exactly the same problem of indeterminacy as we had with T1 itself (Putnam 1978, 127; 1983, 207).
If this is right, then the realist claim that an ideally confirmed theory could be false just comes down to the claim that there are self-standing minds or self-standing languages whose meanings are fixed regardless of how things lie in the world. But if Putnam is right, there are no self-standing meanings in this sense. Descartes’ thought experiment – in either its 17th-century Demonic version or its modern Neuro or Simulationist versions – is incoherent.
But, Putnam argues, this means that the idea that truth is non-epistemic is incoherent. To suppose that our beliefs could all be false, no matter how well they conform to experience and canons of enquiry, makes no sense (Putnam 1978, 128-130). And (assuming the soundness of Putnam’s model-theoretic argument) this also means that the idea of a privileged, God’s eye view of the world – MR – is incoherent. There is no single theory that uniquely corresponds to the nature of a mind-independent world because there are always other interpretation functions with which to generate new theories with the same degree of epistemic virtue. Thus the assumption that the world has an intrinsic nature independently of how it is construed from the standpoint of a particular theory or form of life is as much an ungrounded superstition as the notion of substantial forms.
Rather than aspiring to the idealized God’s eye view of metaphysical realism, Putnam argues that we should recognize that truth, reference and objectivity are properties that our claims and experiences have in virtue of “our” practices of inference, confirmation and observation. To say that the sentence “’Cow’ refers to cows” is true is not to make a claim about some determinate relationship – reference – between word and world but to say something about the situations in which a competent speaker of English should use the term ‘cow’ (Putnam 1978, 128, 136). From within the shared practices of English speakers, this fact just shows up as an a priori truth. But this (as Kant also claimed) does not reflect some impossible insight into the mind-independent nature of things, but simply reflects our acculturated understanding of what is appropriate to say, when (137). Even the metaphysical structure of the world is – according to this view – a perspective that reflects the background understanding and interests of creatures who share the relevant concerns and practices. Reference is, as Putnam puts it elsewhere, a “matter of interpretation” which presupposes “a sophisticated understanding of the way words are used by the community whose words one is interpreting” (Putnam 1995, 119). So, by the same token, there can be no ready-made totality of objects of reference since (again) this presupposes the discredited God’s eye view:
[From] my “internal realist” perspective at least, there is no such totality as All the Objects There are, inside or outside science. “Object” itself has many uses, and as we creatively invent new uses of words, we find that we can speak of “objects” that were not “values of any variable” in any language we previously spoke. (The invention of “set theory” by Cantor is a good example of this.) (Putnam 1995, 120)
Derrida, Jacques (1988). Limited Inc. Samuel Weber and Jeffrey Mehlman (trans.). Evanston, Ill.: Northwestern University Press.
Putnam, Hilary (1978). Meaning and the Moral Sciences. London: Routledge & Kegan Paul.
Putnam, Hilary (1981). Reason, Truth, and History. Cambridge: Cambridge University Press.
Putnam, Hilary (1983). Realism and Reason: Philosophical Papers Volume 3. Cambridge: Cambridge University Press.
[i] We can summarise this state of affairs as follows:
“Fido” —> the object Fido
“Shlomo” —> the object Shlomo
“…is a cat…” —> property of cattiness
“…is a dog…” —> property of dogginess
“…is the father of…” —> relation of fatherhood
[ii] For example, we can imagine a deviant interpretation function that maps up terms in the “wrong” way:
“Fido” —> the object Fido’s shadow
“Shlomo” —> the object Shlomo’s shadow
“…is a cat…” —> property of being the shadow of a cat
“…is a dog…” —> property of being the shadow of a dog
“…is the father of…” —> relation of fatherhood
[iii] The branch of mathematical logic that examines the formal relationships between languages and the models assigned to them under interpretation functions.
[iv] Suppose T1 has an interpretation function I that includes the first set of assignments given above (“Fido” refers to Fido, “Shlomo” refers to Shlomo, etc.) whereas T2’s interpretation function has the second. Thus the sentence “Shlomo is a cat” says that the object Shlomo is a cat in T1, whereas in T2 the same sentence says that a particular shadow (Shlomo’s) is the shadow of a cat – which also happens to be true.
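For readers who like to see the trick mechanically, the deviant assignment in footnotes [ii] and [iv] can be sketched in a few lines of Python. The toy universe, names and predicate terms are my own illustrative stand-ins, not anything in Putnam’s text:

```python
# A toy universe containing two objects and their shadows.
universe = ["Fido", "Shlomo", "shadow(Fido)", "shadow(Shlomo)"]

# Interpretation I: the "intended" assignment of terms to extensions.
I = {
    "Fido": "Fido",
    "Shlomo": "Shlomo",
    "is_a_cat": {"Shlomo"},
    "is_a_dog": {"Fido"},
}

# A permutation of the universe mapping each object to its shadow.
perm = {"Fido": "shadow(Fido)", "Shlomo": "shadow(Shlomo)"}

# Interpretation J: push I through the permutation, so "is_a_cat" now
# picks out the set of shadows of cats, etc.
J = {
    "Fido": perm["Fido"],
    "Shlomo": perm["Shlomo"],
    "is_a_cat": {perm[x] for x in I["is_a_cat"]},
    "is_a_dog": {perm[x] for x in I["is_a_dog"]},
}

def true_in(interp, name, predicate):
    """Is the atomic sentence '<name> is a <predicate>' true under interp?"""
    return interp[name] in interp[predicate]

# "Shlomo is a cat" gets the same truth value under both interpretations,
# even though J assigns it a 'violently different' subject matter.
assert true_in(I, "Shlomo", "is_a_cat") == true_in(J, "Shlomo", "is_a_cat") == True
assert true_in(I, "Fido", "is_a_cat") == true_in(J, "Fido", "is_a_cat") == False
```

Since every predicate’s extension is permuted along with every name’s referent, no atomic sentence can change truth value – which is the point of the theorem.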
You can hear my recent Anthropotech talk, Beyond Enhancement: Anthropologically Bounded Posthumanism, at the Anthropotech Multimedia website here.
The PowerPoint presentation is below.
A number of writers in the Speculative Realist blogosphere have cited Ray Brassier’s discussion of Paul Churchland’s attempt to reconcile scientific realism and a Prototype Vector Activation (PVA) theory of content in Chapter 1 of Nihil Unbound (Brassier 2007). Though I am reasonably familiar with the work of Paul and Patricia Churchland, I recall finding the argument in this section tough to disentangle first and second time round. But enough people out there seem convinced by Ray’s position to warrant another look.
This is my first attempt at a reconstruction and evaluation of Ray’s position in Nihil (it does not yet take account of any subsequent changes in his position – I suspect that others will be better placed than me to incorporate these into the discussion). In what follows I’ll briefly summarize the PVA theory in the form familiar to Ray at the time of Nihil’s publication. The second section will then attempt to reconstruct his critique of Churchland’s attempt to reconcile his theory of content with a properly realist epistemology.
1. The Prototype Activation Theory of Content
Firstly, what is the PVA theory of content? As many will already be aware, the term comes from the argot of neural network modeling. Artificial Neural Networks (ANN’s) are a technique for modeling the behaviour of biological nervous systems using software representations of neurons and their interconnections. Like actual neurons, the software neurons in ANN’s respond to summed inputs from other neurons or from the ‘world’ by producing an output. Many ANN’s consist of three layers: an input layer which is given some initial data, a hidden layer that transforms it, and an output layer which presents the network’s ‘response’ to the input.
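A minimal sketch of such a three-layer network is easy to write down. The layer sizes, the sigmoid activation and the random initialization below are illustrative choices on my part, not anything specific to the Churchlands’ models:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    """Squash a summed input into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """Each neuron sums its weighted inputs and squashes the result."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# Randomly initialized weights: 4 input units -> 3 hidden -> 2 output.
w_hidden = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
w_output = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]

pattern = [0.9, 0.1, 0.3, 0.7]      # some initial data at the input layer
hidden = layer(pattern, w_hidden)   # the hidden layer transforms it
output = layer(hidden, w_output)    # the network's 'response'

# Before training, the output is just noise: two activation values in
# (0, 1) with no systematic relation to any category.
assert len(output) == 2 and all(0.0 < o < 1.0 for o in output)
```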
Learning in Neural Nets usually involves reducing the error between the actual output of the network (initialized randomly) and the desired output, which might well be the allocation of an input pattern to some category like ‘true’ or ‘false’, ‘male’ or ‘female’, ‘combatant’ or ‘non-combatant’ or ‘affiliation unknown’, represented by activations values at the output.
Among the key properties adjusted during the training of ANN’s are the ‘weights’ or connection strengths between neurons, since these determine whether a given input generates random noise (always the case prior to training) or useful output. In ANN’s there are supervised learning algorithms that tweak the network’s weights until the error between the actual output and that desired by the trainers is minimized. Some ANN’s (for example, Kohonen Self-Organizing Feature Maps) use more biologically plausible unsupervised learning algorithms to generate useful output, such as pattern identification, without that pattern having to be pre-identified by a trainer. One example is the “Hebb Rule”, which adjusts connection weights according to the timing of neuron activations (neurons that fire together, wire together). So ANN’s don’t have to be spoon-fed. They can latch onto real structure in a data set for themselves.
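One simple form of the Hebb rule (there are many variants) increments a weight in proportion to the product of pre- and post-synaptic activity. The learning rate and unit values here are illustrative:

```python
def hebb_update(weights, pre, post, rate=0.1):
    """Strengthen weights[i][j] when pre-synaptic unit j and post-synaptic
    unit i are active together: neurons that fire together, wire together."""
    return [
        [w + rate * post[i] * pre[j] for j, w in enumerate(row)]
        for i, row in enumerate(weights)
    ]

weights = [[0.0, 0.0], [0.0, 0.0]]  # two pre-, two post-synaptic units
pre, post = [1.0, 0.0], [1.0, 1.0]  # only the first input unit is active

weights = hebb_update(weights, pre, post)

# Only connections from the active input unit are strengthened; the
# silent unit's connections are untouched.
assert weights[0][0] > 0 and weights[1][0] > 0
assert weights[0][1] == 0 and weights[1][1] == 0
```

Note that no trainer supplies a target output here: the weight change is driven entirely by the co-activity of the units, which is why rules of this family count as unsupervised.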
Learning in ANN’s, then, can be thought of as a matter of rolling down the network’s “error surface” – a curve graphing the relationship of error to weights – to an adequate minimum error. An error surface represents the numerical difference between desired and actual output, against relevant variables like the interneuron weights generating the output.
Categories acquired through training are represented as prototype regions within the “activation space” (the space of all possible activation values of its neurons) of the network where the activations representing the items falling under a corresponding category are clustered. For Churchland, prototypes represent a structure-preserving mapping or “homomorphism” from uncategorized input onto conceptual neighborhoods within the n-dimensional spaces of neural layers downstream from the input layer (Churchland 2012, viii, 77, 81, 103, 105). In effect, the neural network learns concepts by “squishing” families of points or trajectories in a high-dimensional input space onto points or trajectories clustered in a lower-dimensional similarity space. Two trained up neural nets, then, can be thought of as having acquired similar concepts if the prototypes in the first net form the vertices of a similar geometrical hypersolid to those in the second net. The Euclidean distances between the prototypes do not need to have the same magnitude but they need to be proportionate between corresponding or nearly-corresponding points. It’s important for Churchland that the distance-similarity metric is insensitive to dimensionality, for this, he argues, allows conceptual similarity to be measured across networks that have different connectivities and numbers of neurons (Churchland 1998).
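The dimension-insensitive comparison can be sketched roughly as follows. The function names and tolerance are my own illustrative choices, not Churchland’s formalism; the idea is just that two prototype configurations count as similar when their pairwise distances are proportionate, whatever the dimensionality of the spaces they sit in:

```python
import math
from itertools import combinations

def distances(prototypes):
    """Euclidean distances between all pairs of prototype points."""
    return [math.dist(p, q) for p, q in combinations(prototypes, 2)]

def similar_shape(protos_a, protos_b, tol=1e-6):
    """Two prototype configurations are conceptually similar if their
    pairwise distances are proportionate (same shape up to a uniform
    scale factor), regardless of the dimensionality of each space."""
    da, db = distances(protos_a), distances(protos_b)
    ratios = [a / b for a, b in zip(da, db)]
    return max(ratios) - min(ratios) < tol

# Net A: three prototypes in a 2-dimensional activation space.
net_a = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# Net B: the 'same' triangle, scaled by 2 and embedded in 3 dimensions.
net_b = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0)]

# Different sizes, different dimensionalities, same hypersolid shape.
assert similar_shape(net_a, net_b)
```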
The resultant theory of content is geometrical rather than propositional and, according to Churchland, internalist rather than externalist; it is also holist rather than atomist. It is geometric insofar as conceptual similarity is a matter of structural conformity between sculpted activation spaces. Such representations can capture propositional structure, but need not represent it propositionally. For one thing, the stored information in the inter-neural weights of the network need not exhibit the modularity that we would expect if that information were stored in discrete sentences. In most neural net architectures all the inter-neural weightings of the trained up network are involved in generating its discrepant outputs (Ramsey, Stich and Garon 1991).
Churchland’s internalism is a little more equivocal, arguably, than his anti-sententialism. The account is internalist insofar as it is syntactic, where the relevant syntactic elements are held to reside inside our skulls. Information about the real world features or structures tracked by prototypes plays no role in measures of conceptual similarity at all. Theoretically, conceptually identical prototypes could track entirely disparate environmental features so long as they exhibited the relevant structural conformity. Thus conceptual content for Churchland is a species of narrow content. However, Churchland regards conceptual narrow content as but one component of the “full” semantic content in PVA. The other components are the abstract but physically embodied universals tracked by sculpted activation spaces:
A point in activation space acquires a specific semantic content not as a function of its position relative to the constituting axes of that space, but rather as a function of (1) its spatial position relative to all the other contentful points within that space; and (2) its causal relations to stable and objective macro-features of the external environment (Churchland 1998, 8).
Fans of active-externalist or embodied models of cognition might argue that this syntactic viewpoint on conceptual similarity needs to be subsumed within a wide, process-externalist conception to allow for cases in ethology and robotics where the online prowess of a neural representation depends on the presence of enabling factors in an organism or robot’s environment (Wheeler 2005). However, I will not consider this possibility further since it is not directly relevant to Brassier’s discussion.
2. Brassier’s Critique
Brassier argues for two important claims. The first, B1, concerns the capacity of Churchland’s naturalism to express the epistemic norms that might distinguish between competing theories – most relevantly, here, different theories of mental content or processing such as PVA, on the one hand, or folk psychology (FP), on the other.
Brassier claims that Churchland’s attempt to express superempirical criteria for theoretical virtue – “ontological simplicity, conceptual coherence and explanatory power” (Brassier 2007, 18) – in neurocomputational terms leaves his account vacillating between competing theories or ontologies. This is because his revisionary account of the superempirical virtues is either 1) essentially pragmatic, concerned only with functional effectiveness of organisms who instantiate these prototype frameworks in their nervous systems, or 2) a metaphysical account whose claims go beyond mere pragmatic efficacy.
The second claim, B2, is more programmatic and general. B2 is the claim that naturalism and empiricism are each unable to provide a normative foundation for the scientific destruction of the “manifest image”. B1 supports B2, according to Brassier, because Churchland – whom Brassier regards as one of the most brilliant, radical and revisionary of naturalist metaphysicians – is unable to support his vaulting ontological ambitions without sacrificing his pragmatic scruples. Brassier thus sees Churchland’s philosophy as “symptomatic of a wider problem concerning the way in which philosophical naturalism frames its own relation to science”.
Much of Brassier’s argument in section 1.6 of Nihil Unbound – “From the Superempirical to the Metaphysical” – centers on a relatively short text by Churchland on Bas van Fraassen’s constructive empiricism (Churchland 1985). According to Brassier, Churchland uses this text to propose replacing the “normative aegis of truth-as-correspondence” with “‘superempirical’ virtues of ontological simplicity, conceptual coherence, and explanatory power” (Brassier 2007, 18).
In the context of our familiar folk-distinction between epistemic criteria for belief-selection and semantic relationships between beliefs and things, Brassier’s gloss might seem to confuse epistemology and semantics. Superempirical truth is a putative aim of scientific enquiry, not a criterion by which we may independently estimate its success (albeit an aim that is questioned both by Churchland and van Fraassen). This also seems to be Churchland’s position in the van Fraassen essay. The superempirical virtues are, he writes, “some of the brain’s criteria for recognizing information, for distinguishing information from noise” (Churchland 1985; Brassier 2007, 23).
Churchland’s claim in context is not that these are better criteria for theory choice than truth but that they are preferable to the goal of empirical adequacy favoured by van Fraassen’s constructive empiricism, since the latter is committed to an ultimately unprincipled distinction between modal claims about observables and unobservables. From this we might infer that the superempirical virtues are not alternatives to truth but ways of estimating either truth or the relevant alternatives to truth that could be adopted by post sententialist realisms.
Churchland questions the status of scientific truth not (as in van Fraassen) to restrict sentential truth claims to correlations with their “empirical sub-structures”, but because truth is a property of sentences or of what sentences express (propositions or statements), and he questions whether sentences are the basic elements of cognitive significance in human and non-human cognizers.
If we are to reconsider truth as the aim or product of cognitive activity, I think we must reconsider its applicability across the board, and not just in some arbitrarily or idiosyncratically segregated domain of ‘unobservables.’ That is, if we are to move away from the more naive formulations of scientific realism, we should move in the direction of pragmatism rather than in the direction of positivistic instrumentalism (Churchland 1985, 45).
Churchland’s claim that sentential or linguaformal representations are not basic to animal cognition is supported by two claims: 1) that natural selection favours neural constructions attuned to the dynamical organization of adaptive behaviour, and 2) that this role is not best understood in sententialist terms.
When we consider the great variety of cognitively active creatures on this planet – sea slugs and octopi, bats, dolphins and humans – and when we consider the ceaseless reconfiguration in which their brains or central ganglia engage – adjustments in the response potentials of single neurons made in the microsecond range, changes in the response characteristics of large systems of neurons made in the seconds-to-hours range, dendritic growth and new synaptic connections and the selective atrophy of old connections effected in the day-upwards range – then van Fraassen’s term “construction” begins to seem highly appropriate. . . . Natural selection does not care whether a brain has or tends towards true beliefs, so long as the organism reliably exhibits reproductively advantageous behaviour. Plainly there is going to be some connection between the faithfulness of the brain’s ‘world model’ and the propriety of the organism’s behaviour, but just as plainly the connection is not going to be direct.
When we are considering cognitive activity in biological terms and in all branches of the phylogenetic tree, we should note that it is far from obvious that sentences and propositions or anything remotely like them constitute the basic elements of cognition in creatures generally. Indeed . . . it is highly unlikely that the sentential kinematics embraced by folk psychology and orthodox epistemology represents or captures the basic elements of cognition and learning even in humans . . . If we are ever to understand the dynamics of cognitive activity, therefore, we may have to reconceive our basic unit of cognition as something other than the sentence or the proposition, and reconceive its virtue as something other than truth (Churchland 1985, 45-6).
There is nary a mention of concepts derived from theories of neurocomputation in the 1985 text, but it is pretty easy to see that the PVA model is at least a candidate for Churchland’s notional alternative to the semantics, epistemology and psychology of folk. Prototype points or trajectories are cases of dynamical entities called attractors. An attractor is a limit towards which orbits within a region of a phase space tend as some function (an iterative map or differential equation) is applied to them. When a neural network is trained up, orbits whose vectors include a large variety of input states will evolve towards some preferred prototypical point – that is just how the network extracts categories from complex data sets. This allows trained-up networks to engage in a process that Churchland calls ‘vector completion’: embodying expectations about the organization and category of the input data set which may tend towards a correct assay even when that data set is somehow degraded (Churchland 2007, 102). Since attractors also reflect a flexible, dynamical response to varying input, they are also potential controllers for an organism’s behaviour – with vector completion offering the benefits of graceful degradation in a noisy, glitch-ridden world.
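The dynamics described here can be made concrete with a toy illustration. The following is a minimal sketch of vector completion in a Hopfield-style attractor network (my own illustrative example, not a model Churchland himself presents): prototype patterns are stored as attractors via Hebbian learning, and a degraded input vector relaxes to the nearest stored prototype.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights that store each prototype pattern as an attractor."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / patterns.shape[0]

def complete(W, state, steps=10):
    """Vector completion: iterate until the input relaxes towards a prototype."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Two orthogonal 8-unit prototype patterns (+1/-1 coding).
prototypes = np.array([[ 1,  1,  1,  1, -1, -1, -1, -1],
                       [ 1, -1,  1, -1,  1, -1,  1, -1]])
W = train_hopfield(prototypes)

# Degrade the first prototype by flipping two units, then let it relax.
noisy = prototypes[0].copy()
noisy[0] = -noisy[0]
noisy[4] = -noisy[4]
recovered = complete(W, noisy)  # relaxes back to prototypes[0]
```

The degraded vector falls within the basin of attraction of the first prototype, so the network’s “expectation” corrects the corrupted units – a crude analogue of the graceful degradation Churchland has in mind.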
This suggests a potential cognitive and cybernetic advantage over sententialist models. Humans and higher nonhuman animals regularly make skillful, and occasionally very fast, abductive inferences about the state of their world. For example,
- Smoke is coming out of the kitchen – the toast is burning!
- There are voices coming from the empty basement – the DVD has come off pause!
- Artificial selection of horses, pigeons, pigs, etc. can produce new varieties of creature – Evolution is natural selection!
But is our capacity for fast and fairly reliable abduction consistent with the claim that beliefs are “sentences in the head” or functionally independent representations of some other kind? Jerry Fodor, for one, concedes that this makes abduction hard to explain, because it requires our brains to put a “frame” around the representations relevant to making the abduction: information about the Highway Code or the diameter of the Sun probably won’t be relevant to figuring out that burning toast is causing the smoke in the kitchen. But within the FP framework, relevance is a holistic property beliefs have in virtue of their relations to lots of other beliefs. But which ones? How do our brains know where to affix the frame in any given situation without first making a costly, unbounded search through all our beliefs, inspecting each for its relevance to the problem?
Churchland thinks that the PVA model can obviate the need for epistemically unbounded search, because the holistic and parallel character of neural representation means that all the information stored in a given network is active in the relaxation to a specific prototype (Churchland 2012, 68-9). It’s possible that Churchland is being massively over-optimistic here. For example, can PVA theory convincingly account for the kind of analogical reasoning employed in the case of Darwin’s inference to the best explanation? Churchland thinks it can. He argues, reasonably, that prototype frameworks are the kind of capacious cognitive structure that can be routinely redeployed from the narrow domain in which they are acquired so as to reorganize some new cognitive domain. The details of this account are thin as things stand, but the basic idea seems worth pursuing. Children and adults regularly misapply concepts – e.g. when seeing a dolphin as a fish – with the result that other prototypes (e.g. mammal) end up having to be rectified and adjusted (Churchland 2012, 188-9).
Moreover, according to Churchland, the PVA system provides a semantic substitute for truth in the form of the aforementioned homomorphism or structural conformity between prototype neighborhoods and the structure of some relevant parts of the world.
So the take-home moral of the excursion into the biology of neural adaptation, for Churchland, is that truth is not a necessary condition for the adaptive organization of behaviour, and that if we are to understand the relationship between cognitive kinematics and the organization of behaviour we may need to posit units of cognitive significance other than sentential/propositional ones. This new conception of cognitive significance, he thinks, is liable to be constructive because it will make possible a closer understanding of the connection between the morphogenesis of neuronal systems, the dynamics of representation and the dynamical organization of behaviour.
Strangely, Brassier seems to read Churchland as making a quite different claim in the quoted passage: namely, that the superempirical criteria of theory or prototype-framework choice are reducible (somehow) to the adaptive value of trained networks in guiding behaviour:
On the one hand, since ‘folk-semantical’ notions as ‘truth’ and ‘reference’ no longer function as guarantors of adequation between ‘representation’ and ‘reality’, as they did in the predominantly folk psychological acceptation of theoretical adequation – which sees the latter as consisting in a set of word-world correspondences – there is an important sense in which all theoretical paradigms are neurocomputationally equal. They are equal insofar as there is nothing in a partitioning of vector space per se which could serve to explain why one theory is ‘better’ than another. All are to be gauged exclusively in terms of what Churchland calls their ‘superempirical’ virtues; viz. according to the greater or lesser degree of efficiency with which they enable the organism to adapt successfully to its environment. (Brassier 2007, 19)
It is implicit in Churchland’s account that the superempirical virtues must be virtues applicable to neural representational strategies – since these are the more basic elements of cognition to which he alludes in his discussion of van Fraassen. However, it does not remotely follow that these virtues should be identified with “the greater or lesser degree of efficiency with which they enable the organism to adapt successfully to its environment”, since, as Churchland emphasizes even here, there is only an indirect relation between “the faithfulness of the brain’s ‘world model’” and its organizational efficacy. For example, the functional value of a prototype scheme for an organism is only indirectly related to its representational prowess or accuracy – factors like speed, ease of acquisition and energy consumption would also need to be factored into any ethological assessment of competing schemes’ costs and benefits. As work in artificial intelligence shows, fast and dirty representational schemes which work in reliably present-at-hand environmental contexts, while lacking rich representational or conceptual content, seem to be evolutionarily favoured in many instances (see Wheeler 2005).
In fact, there is nothing in this passage that suggests that Churchland thinks that the superempirical virtues must be reduced to evolutionary-functional terms at all – evolutionary theory just does not play this constitutive role in his theory of content or his epistemology.
Of course, it does not follow that Churchland precludes a neurocomputation-friendly understanding of the superempirical virtues. He claims that they need to be as applicable to the understanding of epistemological systems that do not incorporate cultural or linguistic components as to those that do. He also implies, as we have seen, that these systems should be understood as engaged in a constructive activity evaluable according to criteria that can be generalized well beyond the parochial sphere of propositional attitude psychology. Churchland states as much when he claims that they are the brain’s “criteria” for distinguishing information from noise: simplicity, coherence and explanatory power need to be interpreted in a generalized manner consilient with the PVA theory of content (see also Brassier 2007, 23).
Churchland thus needs a generalized, PVA-friendly account of the superempirical virtues. Brassier agrees, but thinks that this requires Churchland either to embrace a neurocomputational version of idealism – which, as a realist, he would not want – or to posit a “pre-constituted physical reality” and thus to “forsake his neurocentric perspective” by adopting a metaphysics which cannot be secured from within a naturalistic framework (Brassier 2007, 20-1).
Well, for sure, no realist worth her salt will want to commit to the claim that reality is constituted by its being a possible representatum of a neurological process. The nearest any contemporary realist comes to this idea is the claim on the part of Ontic Structural Realists that to be is to be a pattern, and that a pattern ‘is real’ if the compression algorithm required to encode it requires a smaller number of bits than a ‘bit string’ representation of the entire data set in which the pattern resides (Dennett 1991, 34; Ladyman and Ross 2007, 202). But a) this is a far more general constraint on existence than Brassier’s touted neurological variant and should in no way be confused with a commitment to a kind of transcendental subjectivity; b) there is no reason why Churchland has to embrace anything like it (though he might for all I know). From the claim that the superempirical virtues are ascribable, in some form, to neurocomputational structures, it does not follow that every constituent of reality must be accessible to neural coding strategies.
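The compressibility criterion can be made concrete with a toy comparison (my own illustration, not an example drawn from Dennett or from Ladyman and Ross): a genuinely patterned bit string admits an encoding far shorter than its verbatim representation, while incompressible noise does not.

```python
import random
import zlib

random.seed(0)

# A highly patterned string and an (effectively) random one, both 1000 bytes.
patterned = ("10" * 500).encode()
noise = bytes(random.getrandbits(8) for _ in range(1000))

# Compressed length relative to verbatim length: well below 1 indicates
# a real pattern in Dennett's sense; near (or above) 1 indicates noise.
ratio_patterned = len(zlib.compress(patterned)) / len(patterned)
ratio_noise = len(zlib.compress(noise)) / len(noise)
```

Here zlib stands in for the unspecified compression algorithm: the patterned string shrinks to a tiny fraction of its verbatim length, while the random string does not compress at all, which is just the contrast the pattern-realist criterion exploits.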
Now, clearly, in order to frame this thought the cognizer must have a concept of reality and a concept of what it is to represent it (e.g. a partial mapping or homomorphism from abstract prototype structure onto abstract world-structure), and these must be embodied in the thinker’s neural states, somehow. If we are dualists, or if we believe that conceptual content is not a property of neural states, then we will deny that this is possible. However, Brassier does not explain why one should reject the claim that conceptual content is a property of neural states in his critical discussion of Churchland. Indeed, he specifically disclaims this critical option earlier on when rejecting Lynne Rudder Baker’s criticism that the Churchland-style eliminativist rejection of the propositional attitudes involves a self-vitiating performative contradiction (Brassier 2007, 17).
Does Brassier have any other arrows in his quiver? Well, he argues that if the superempirical virtues are “among the brain’s most basic criteria for recognizing information”, then all conceptual frameworks that fail to maximize representational adequacy – like FP – would have been eliminated. Thus, if simplicity, coherence and explanatory power are constitutive of representational success: “all theories are neurocomputationally equal inasmuch as all display greater or lesser degrees of superempirical distinction” (Brassier 2007, 23). This seems wrong for at least two reasons. Quite obviously, if superempirical distinction is an ordinal concept (as Brassier concedes in this passage), some theories can have more of it than others and so will not be neurocomputationally equal. This is a recurrent trope in Churchland’s work: some conceptual frameworks mesh the ontology of natural science with our experience better than others. Learning to discriminate temperatures according to the Kelvin scale, for example, allows us to map our experience more directly onto the regularities expressed in the ideal gas laws. Thus the Kelvin scale has greater superempirical distinction than the Fahrenheit and Celsius scales, though, as Churchland amusingly recounts in Plato’s Camera, somewhat less cultural heft in the common rooms of the University of Winnipeg (Churchland 2012, 227).
Of course, it is always possible that the empirical and structural virtues of theories might underdetermine theory choice, and thus choice of ontology, in certain situations. There could, in principle, be theories with disparate ontologies that are equally good by way of whatever variants of simplicity, coherence and explanatory power are applicable to the PVA model. This seems right, but it is not the same as all theories being on equal terms. Nor does it obviously preclude the naturalist from framing an ontology that is constrained by these virtues in some way. I conclude that Brassier fails to establish B1. The PVA model does not leave Churchland unable to say why some theories are better than others, and it does not preclude Churchland or the fan of the PVA model from having a naturalistically constrained ontology. But if B1 is not established, then B2 – the claim that naturalism is unable to provide a satisfactory account of science – is not established on this reading.
Brassier, Ray (2007), Nihil Unbound: Enlightenment and Extinction, Palgrave Macmillan.
Churchland, Paul (1985), “The Anti-Realist Epistemology of van Fraassen’s The Scientific Image”, in Images of Science, edited by P. M. Churchland and C. A. Hooker, Chicago: University of Chicago Press.
Churchland, Paul (1998), ‘Conceptual Similarity across Sensory and Neural Diversity: The Fodor/Lepore Challenge Answered’, Journal of Philosophy 95 (1), 5-32.
Churchland, Paul (2007), Neurophilosophy at Work, Cambridge: Cambridge University Press.
Churchland, Paul (2012), Plato’s Camera: How the Physical Brain Captures a Landscape of Abstract Universals, Cambridge Mass: MIT Press.
Dennett, Daniel (1991), ‘Real Patterns’, Journal of Philosophy 88 (1), 27-51.
Ladyman, James and Ross, Don (2007), Every Thing Must Go: Metaphysics Naturalized, Oxford: Oxford University Press.
Ramsey, William, Stich, Stephen P. and Garon, J. (1991), ‘Connectionism, Eliminativism and the Future of Folk Psychology’, in William Ramsey, Stephen P. Stich and D. Rumelhart (eds.), Philosophy and Connectionist Theory, Lawrence Erlbaum.
Wheeler, Michael (2005), Reconstructing the Cognitive World: The Next Step, Cambridge, Mass.: MIT Press.