CFP: SEP-FEP 2014 Utrecht, 3-5 September

February 19th, 2014 | Author: johnm

CALL FOR PAPERS

The Society for European Philosophy and Forum for European Philosophy


Joint Annual Conference

 

Philosophy After Nature

Utrecht University

3-5 September 2014

The Joint Annual Conference of The Society for European Philosophy and Forum for European Philosophy in 2014 will be hosted by the Centre for the Humanities, the Faculty of Humanities and the Descartes Institute, Utrecht University, the Netherlands.

Plenary speakers
Professor Michel Serres, Stanford University, Académie française

Information and Thinking/l’information et la pensée

respondent: Professor Françoise Balibar, Université Paris-Diderot

Professor Rahel Jaeggi, Humboldt-Universität zu Berlin

Critique of Forms of Life

respondent: t.b.a.

Professor Mark B.N. Hansen, Duke University
Entangled in Media, Towards a Speculative Phenomenology of Microtemporal Operations

respondent: t.b.a.

The SEP/FEP conference is the largest annual event in Europe that aims to bring together researchers, teachers and others, from different disciplines, who are interested in all areas of contemporary European philosophy. Submissions are therefore invited for individual papers and panel sessions in all areas of contemporary European philosophy. For 2014, submissions that address the conference’s plenary theme – Philosophy After Nature – are particularly encouraged. This would include papers and panels that are after nature in the sense of being in pursuit of nature’s consequences. We invite perspectives on critique, science, ecology, technology and subjectivity as bound up with conceptions of nature, and papers that experiment with various positions in contemporary thought.

Abstracts of 500 words for individual paper submissions and proposals for panels should be sent to Rick Dolphijn (philosophyafternature@uu.nl) by 17 May 2014. Proposals for panels should include a 500-word abstract for each paper within the panel. Proposals from academics, graduate students and independent scholars are welcome.
Conference committee: Rosi Braidotti, Bert van den Brink, Rick Dolphijn, Iris van der Tuin and Paul Ziche.

Enquiries: Rick Dolphijn (philosophyafternature@uu.nl)


A highly illuminating discussion of the place of value, meaning and purpose within a naturalistic worldview. H/t synthetic zero

Objective Ecological Value

On December 8, 2013, in Uncategorized, by enemyin1

This is a sketch of a partial value theory that I’ve been developing while completing my book Posthuman Life. If there are similar theories out there, I’d be grateful for links to bibdata so that I can properly acknowledge them!

In order to construct an anthropologically unbounded account of posthumans, we need a psychology-free account of value. There may, after all, be many possible posthuman psychologies, but we don’t know about any of them to date. However, the theory requires posthumans to be autonomous systems of a special kind: Functionally Autonomous Systems (see below). I understand “autonomy” here as a biological capacity for active self-maintenance. The idea of a system which intervenes in the boundary conditions required for its existence can be used to formulate an Autonomous Systems Account (ASA) of function which avoids some of the metaphysical problems associated with the more standard etiological theory. The version of ASA developed by Wayne Christensen and Mark Bickhard defines the functions of an entity in terms of its contribution to the persistence of an autonomous system, which they conceive as a group of interdependent processes (Christensen and Bickhard 2002: 3). Functions are process dependence relations within actively self-maintaining systems.

Ecological values are constituted by functions. This conception, in turn, allows us to formulate an account of “enlistment”, which then allows us to define what it is to be an FAS (a toy sketch of these definitions follows the numbered list below).

1) (ASA) Each autonomous system has functions belonging to it at some point in its history. Its functions are the interdependent processes it requires to remain autonomous at that point.

2) (Value) If a process, thing or state is required for a function to occur, then that thing or process is a value for that function. Any entity, state or resource can be a value. For example, the proper functioning of a function can be a value for the functions that require it to work.[1]

3) (Enlistment) When an autonomous system produces a function, then any value of that function is enlisted by that system.

4) (Accrual) An FAS actively accrues functions by producing functions that are also values for other FAS’s.

5) (Functional Autonomy) A functionally autonomous system (FAS) is any autonomous system that can enlist values and accrue functions.
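
As a rough illustration, here is a minimal toy sketch of definitions (1)–(5). All of the names (Process, AutonomousSystem, enlists, accrues_functions) and the mining example are my own illustrative assumptions, not part of the account itself:

```python
# Toy sketch of the ASA/value/enlistment/accrual definitions.
# All names and the example are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Process:
    """An interdependent process; `requires` names what it needs to occur."""
    name: str
    requires: frozenset = frozenset()


@dataclass
class AutonomousSystem:
    """(1: ASA) An actively self-maintaining set of interdependent processes."""
    name: str
    functions: set = field(default_factory=set)

    def values(self) -> set:
        """(2: Value) Anything required by one of this system's functions."""
        return {need for f in self.functions for need in f.requires}

    def enlists(self, entity: str) -> bool:
        """(3: Enlistment) The system enlists any value of a function it produces."""
        return entity in self.values()


def accrues_functions(fas: AutonomousSystem, others) -> bool:
    """(4/5: Accrual) True if a function fas produces is also a value for another system."""
    produced = {f.name for f in fas.functions}
    return any(produced & other.values() for other in others)


if __name__ == "__main__":
    # The economy's smelting function requires ore and mining.
    economy = AutonomousSystem(
        "wide-human economy", {Process("smelting", frozenset({"ore", "mining"}))}
    )
    # A mining community produces the mining process (which needs machinery).
    miners = AutonomousSystem(
        "mining community", {Process("mining", frozenset({"machinery"}))}
    )

    print(economy.values())                      # {'ore', 'mining'}
    print(economy.enlists("ore"))                # True: ore is enlisted via smelting
    print(accrues_functions(miners, [economy]))  # True: mining is a value for the economy
```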

People are presumably FAS’s on this account, but so are nonhuman organisms and (perhaps) lineages of organisms. Likewise, social systems (Collier and Hooker 1999) and (conceivably) posthumans. To date, technical entities are not FAS’s because they are non-autonomous. Historical technologies are mechanisms of enlistment, however. For example, without mining technology, certain ores would not be values for human activities. Social entities, such as corporations, are autonomous in the relevant sense and thus can have functions (process interdependency relations) and constitute values of their own. However, while not narrowly human, current social systems are wide humans, not posthumans. As per the Disconnection Thesis, posthumans would be FAS’s no longer belonging to WH (the Wide Human socio-technical assemblage – see Roden 2012).

This is an ecological account in the strict sense of specifying values in terms of environmental relations between functions and their prerequisites (though “environment” should be interpreted broadly to include endogenous as well as exogenous entities or states). It is also an objective rather than subjective account, which has no truck with the spirit (meaning, culture, subjectivity, etc.). Values are just things which enter into constitutive relations with functions (Definition 2 could be expanded and qualified by introducing degrees of dependency). Oxygen was an ecological value for aerobic organisms long before Lavoisier. We can be ignorant of our values and mistake non-values for values, etc. It is also arguable that some ecological values are pathological in that they support some functions while hindering others.[2]

The theory is partial because it only provides a sufficient condition for value. Some values – opera, cigarettes, incest prohibitions and sunsets – are arguably things of the spirit, constituted as values by desires or cultural meanings.

References

Christensen, W. D., and M. H. Bickhard. 2002. “The Process Dynamics of Normative Function.” The Monist 85 (1): 3–28.

Collier, J. D., and C. A. Hooker. 1999. “Complexly Organised Dynamical Systems.” Open Systems & Information Dynamics 6 (3): 241–302.

Roden, D. 2012. “The Disconnection Thesis.” In The Singularity Hypothesis: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, Jim Moor, and Eric Steinhart. Springer Frontiers Collection.



[1] An issue I do not have time to consider is that ecological dependency is transitive. If a function depends on a thing whose existence depends on another thing, then it depends on that other thing. Ecological dependencies thus overlap.

[2] Addictive substances may fall into this class.

On November 29, 2013, in Uncategorized, by enemyin1

Nature has just published a dark philosophical tale by leading philosopher of mind Eric Schwitzgebel and Three Pound Brainer Scott Bakker. Enjoy!

In “The Basic AI Drives” Steve Omohundro has argued that there is scope for predicting the goals of post-singularity entities able to modify their own software and hardware to improve their intellects. For example, systems that can alter their software or physical structure would have an incentive to make modifications that would help them achieve their goals more effectively, as humans have done over historical time. A concomitant of this, he argues, is that such beings would want to ensure that such improvements do not threaten their current goals:

So how can it ensure that future self-modifications will accomplish its current objectives? For one thing, it has to make those objectives clear to itself. If its objectives are only implicit in the structure of a complex circuit or program, then future modifications are unlikely to preserve them. Systems will therefore be motivated to reflect on their goals and to make them explicit (Omohundro 2008).

I think this assumption of ethical self-transparency is interestingly problematic. Here’s why:

Omohundro requires that there could be internal system states of post-singularity AIs whose value content is legible to the system’s internal probes. Obviously, this assumes that the properties of a piece of hardware or software can determine the content of the system states that it orchestrates independently of the external environment in which the system is located. This property of non-environmental determination is known as “local supervenience” in the philosophy of mind literature. If local supervenience for value-content fails, any inner state could signify different values in different environments. “Clamping” machine states to current values would then entail restrictions on the situations in which the system could operate as well as on possible self-modifications.

Local supervenience might well not hold for system values. But let’s assume that it does. The problem for Omohundro is that the relevant inner determining properties are liable to be holistic. The intrinsic shape or colour of an icon representing a station on a metro map is arbitrary. There is nothing about a circle or a square or the colour blue that signifies “station”. It is only the conformity between the relations among the icons and the relations among the stations in the metro system represented that does this (Churchland’s 2012 account of the meaning of prototype vectors in neural networks utilizes this analogy).

The moral of this is that once we disregard system-environment relations, the only properties liable to anchor the content of a system state are its relations to other states of the system. Thus the meaning of an internal state s under some configuration of the system must depend on some inner context (like a cortical map) where s is related to lots of other states of a similar kind (Fodor and Lepore 1992).
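
A toy illustration may help here (the example map, station names and the interpretations() helper are my own assumptions, not Churchland’s or Fodor and Lepore’s): an icon’s reference is fixed entirely by its relational position, so only mappings that preserve the connection relation count as interpretations, and the icons’ intrinsic shapes do no work.

```python
# Toy illustration of holistic content: reference is fixed by relations
# among icons, not by any icon's intrinsic shape or colour.

from itertools import permutations

# Which icons are drawn as connected on the map.
icon_links = {("circle", "square"), ("square", "blue_dot")}

# Which stations are connected in the metro system.
station_links = {("Central", "Museum"), ("Museum", "Harbour")}


def interpretations(icons, stations, icon_rel, station_rel):
    """Yield every icon->station mapping that preserves the connection relation."""
    icons, stations = sorted(icons), sorted(stations)
    for perm in permutations(stations):
        mapping = dict(zip(icons, perm))
        if {(mapping[a], mapping[b]) for a, b in icon_rel} == station_rel:
            yield mapping


for m in interpretations({"circle", "square", "blue_dot"},
                         {"Central", "Museum", "Harbour"},
                         icon_links, station_links):
    print(m)  # only the relation-preserving mapping survives; rename the shapes
              # (circle -> triangle, etc.) and the same relational roles remain
```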

But relationships between states of the self-modifying AI systems are assumed to be extremely plastic, because each system will have an excellent model of its own hardware and software and the power to modify them (call this “hyperplasticity”). If these relationships are modifiable then any given state could exist in alternative configurations. These states might function like homonyms within or between languages, having very different meanings in different contexts.

Suppose that some hyperplastic AI needs to ensure that a state in one of its value circuits, s, retains the value it has under the machine’s current configuration: v*. To do this it must avoid altering itself in ways that would lead to s being in an inner context in which it meant some other value or no value at all. It must clamp itself to meaning-preserving contexts to avoid s assuming v**, v***, etc.

To achieve clamping, though, it needs to select possible configurations of itself in which s is paired with a context c that preserves its meaning.

The problem for the AI is that all [s + c] pairings are yet more internal system states, and any system state might assume different meanings in different contexts. To ensure that s means v* in context c, it needs to do to some [s + c] what it had been attempting with s – restrict itself to the supplementary contexts in which [s + c] leads to s having v* as a value and not something else.

Now, a hyperplastic machine will always be in a position to modify any configuration that it finds itself in (for good or ill). So this problem will be replicated for any combination of states [s + c + …] that the machine could assume within its configuration space. Each of these states will have to be repeatable in yet other contexts, and so on. Since a concatenation of system states is itself a system state to which the principle of contextual variability applies, there is no final system state for which this issue does not arise.

Clamping any arbitrary s requires that we have already clamped some undefined set of contexts for s, and this condition applies inductively to all system states. So when Omohundro envisages a machine scanning its internal states to explicate their values, he seems to be proposing that an infinite task has already been completed by a being with vast but presumably still finite computational resources.
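
The regress can be rendered as a toy recursion (my own hypothetical sketch, not Omohundro’s model): clamping a state requires first clamping the wider state that includes its context, and that wider state raises the same demand, so the procedure never bottoms out for a finite machine.

```python
# Toy rendering of the clamping regress. Purely illustrative: clamping
# state s requires clamping [s + c], which is itself a state to clamp.

def clamp(state: tuple, depth: int = 0, max_depth: int = 10) -> bool:
    """Try to fix the value of `state`; True only if the regress bottoms out."""
    if depth >= max_depth:
        return False                      # finite resources: give up, no base case reached
    context = ("c%d" % depth,)            # whichever inner context the system settles on
    wider_state = state + context         # [s + c] is itself a system state...
    return clamp(wider_state, depth + 1)  # ...whose value must in turn be clamped

print(clamp(("s",)))  # False: the task does not terminate, however large max_depth is
```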

References

Block, Ned. 1986. “Advertisement for a Semantics for Psychology.” Midwest Studies in Philosophy 10 (1): 615–78.

Churchland, Paul. 2012. Plato’s Camera: How the Physical Brain Captures a Landscape of Abstract Universals. MIT Press (MA).

Omohundro, S. M. 2008. “The Basic AI Drives.” Frontiers in Artificial Intelligence and Applications 171: 483.

 

 


Rebecca Saxe and Clockwork Orange 2.0

On September 25, 2013, in Uncategorized, by enemyin1

 

In this excellent presentation Saxe claims that Transcranial Magnetic Stimulation applied to the temporo-parietal junction (TPJ) – a region specialized for mentalizing in human adults – can improve the effectiveness of moral reasoning by improving our capacity to understand other human minds.

This suggests an interesting conundrum for moral philosophers working in the Kantian tradition, where recognizing the rationality and personhood of offenders is held to be a sine qua non for justifications of punishment. We can imagine a Philip K. Dick-style world in which miscreants are equipped with surgically implanted TMS devices which zap them whenever an automated surveillance system judges them to be in a morally tricky situation calling for rapid and reliable judgements about others’ mental states. Assuming that such devices would be effective, would this constitute a violation of the offender’s personhood – treating the offender as a refractory animal who must be conditioned to behave in conformity with societal norms, like Alex in A Clockwork Orange? Or would the enhancement give that status its due by helping the offender become a better deliberator?

 

Assuming the TMS devices could achieve their aim of improving moral cognition, it seems odd to say that this would be a case of “tiger training” which bypasses the offender’s capacity for moral reasoning, since it would presumably increase that very capacity. It is even conceivable that an effective moral enhancement could be co-opted by savvy Lex Luthor types to enhance the criminal capacities of their roughnecks, making them more effective at manipulating others and sizing up complex situations. At the same time, such an intervention would be quite different from punishment practices that appeal to the rational capacities of the offender. Having one’s TPJ zapped is not the same as being asked to understand the point of view of your victim – though it might enhance your ability to do so.

So an effective moral enhancement that increases the capacity for moral reasoning in the cognitively challenged would be neither a violation of nor an appeal to their reason. It would not be like education or a talking therapy, but neither would it be like the cruder forms of chemical or psychological manipulation. It could enhance the moral capacities of people, but it would do so by tying them into technical networks that, as we know, can be co-opted for ends that their creators never anticipated. It might enhance the capacity for moral agency while also increasing its dependence on the vagaries of wider technical systems. Some would no doubt see such a development as posthuman biopower at its most insidious. They would be right, I think, but technology is insidious precisely because our florid agency depends on a passivity before cultural and technical networks that extend it without expressing a self-present and original human subjectivity.