
"Can you think what I feel? Can you feel what I think?": Notes on affect, embodiment and intersubjectivity in AI

Elizabeth A. Wilson

In 1950, at the very end of his paper on computing machinery and intelligence, Alan Turing turns his mind to the future of intelligent machines. Hoping for a close affiliation between humans and computers, Turing wonders about how to start building artificial expertise. He sees two possibilities:

Many people think that a very abstract activity like the playing of chess would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried. (Turing 1950: 460)

This pairing of chess and child turned out to be foundational for the development of mainstream Artificial Intelligence (AI) from 1950 onwards. On the one hand, the chess-playing computer--and the disembodied, abstract calculation it is said to enact--became a preoccupation for many researchers in the field. The victory of machine over human at chess has been a particularly important benchmark for the actualisation of intelligent artificial systems (Hsu 2002; Newell, Shaw & Simon 1958). On the other hand, the figure of the child has emerged (albeit only recently) as an equally compelling model for the artificial and computational sciences. In the last decade, AI researchers have been turning to infant development research to help them build robust artificial agents whose intelligence--like that of the growing child--derives from their situated and embodied interactions with the world (Breazeal 2002; Brooks 1999; Lungarella, Metta, Pfeifer & Sandini 2003). This “new AI” (as Rodney Brooks describes it) contends that sensory, perceptual and corporeal data form the frame within which higher cognitive faculties evolve.

Many cultural critics have identified the ways in which mainstream AI has pursued a research agenda that places abstraction (chess) and embodiment (child) in an antagonistic relation (Haraway 1985/1991; Hayles 1999; Lenoir 2002a, 2002b). My work is part of this critical tradition, but I would like to resist the impulse to formulate the relation between chess and child in oppositional terms. While the basic orientation of cultural criticism is on target (i.e., there has been a predisposition in mainstream AI to proceed cognitively rather than affectively), too strong an attachment to diagnosing the cognitivism of AI tends to build a picture of AI as only concerned with chess-centric problematics. In this paper, I would like to think in more detail about the relation between chess and child, and between their cognates (thinking and feeling; abstraction and embodiment), in early AI. How are these relations managed? Is it always the case that the cognitive dominates the affective or that abstraction vanquishes embodiment? My answer will be: no, this is not always the case; and the places where unorthodox relations between thinking and feeling are sought in early AI texts can be highly instructive. In Alan Turing’s work, at least, the possible affiliations between thinking and feeling are not exhausted by the structure of oppositionality or by the unilateral domination of abstract calculation over embodiment and feeling. Without question, some of his texts endorse a conventional separation of intellect and body:

I certainly hope and believe that no great efforts will be put into making machines with . . . characteristics such as the shape of the human body; it appears to me to be quite futile to make such attempts and their results would have something like the unpleasant quality of artificial flowers. (Turing 1951/2004: 486)

Yet at the same time, there is an ongoing curiosity in Turing (especially in his marginalia: Copeland 2004; Shieber 2004) about the embodied, the childish, and feeling (Wilson 2002). This paper will begin by exploring parts of Turing’s work where he attempts to negotiate between thinking and feeling in innovative and hitherto unnoticed ways. The role of the imaginary (by which I mean the fantasies and thought-experiments in which Turing engaged--be they mechanical, mathematical, intrapsychic or intersubjective) will be central to this exploration.

I am hoping that an examination of the margins of Turing’s work is instructive in at least two ways. In the first instance, it is instructive for those wanting to explore the heterogeneity of post-war AI. The longer I spend with Turing’s work, the more I suspect that this period is poorly understood if it is interpreted only as a series of conventionalising efforts to favour abstraction and cognition over embodiment and feeling. An analysis of the place of feeling in early AI is also instructive for those wanting to think critically about the newly emergent interest in the infantile and the affective in contemporary AI. The second half of the paper will gesture toward a schema for thinking about such recent developments. Andy Clark (2003) has enthused about the dynamic future of the artificial sciences as they integrate world, body and mind. Of course, this contemporary work in AI, robotics and HCI (Human Computer Interaction) does not emerge ex nihilo sometime in the 1990s; questions about affect, in particular, have been part of AI from the very beginning. It seems to me that an analysis of the ways in which thinking and feeling were miscegenated in the early years of AI will be useful for how we position ourselves to imagine the future embodiments of affects and cognition in artificial systems. More on this toward the end.

Chess machines

Let me begin, then, with a small encounter between the abstract and the affective in Turing’s published work.

In 1953, just a year before he died, Turing published a paper on chess in the collection Faster than Thought (1953/2004). At this time, chess programming was still in its infancy. In 1954 Norbert Wiener noted that the speed of modern computers was sufficient to calculate only two moves ahead; a full game of chess (about 50 moves) “is hopeless in any reasonable time” (Wiener 1954: 175). For most commentators in this immediate post-war period, chess-playing computers presented not just mathematical or engineering difficulties, but also quandaries of imagination:

Though we have seen that machines can be built to learn, the technique of building and employing these machines is still very imperfect. The time is not yet ripe for the design of chess-playing machines on learning principles, although it probably does not lie very far in the future.

A chess-playing machine which learns might show a great range of performance, dependent on the quality of the players against whom it is pitted. The best way to make a master machine would probably be to pit it against a wide variety of good chess players. On the other hand, a well-contrived machine might more or less be ruined by the injudicious choice of its opponents. A horse is also ruined if the wrong riders are allowed to spoil it. (Wiener 1954: 177)

Not only does Wiener rely on fancy to substantiate the parameters of artificial intelligence (imagined contests between as-yet unbuilt machines and unnamed opponents), he also implies that intersubjectivity (specifically, the interaction of machine and human) may be fundamental to how that intelligence is built. As we will see, imagination and intersubjective relations are also important to how Turing conceives of artificial intelligence.

In 1953 Turing approaches the problem of chess-playing machines in his characteristically unorthodox manner. Before he gets into the details of how such a machine might be built, and as he is laying out the parameters for thinking about a chess program, he takes a small detour:

[to the questions already asked about the specifics of what kind of chess-playing machine we are aiming to build] we may add two further questions, unconnected with chess, which are likely to be on the tip of the reader’s tongue:

Could one make a machine which would answer questions put to it, in such a way that it would not be possible to distinguish its answers from those of a man?

Could one make a machine which would have feelings like you and I do? (Turing 1953/2004: 569)

Turing claims that a taste for questions like these belongs not to himself but to his readers. However, we ought not be overly influenced by this attempt at deflection--there is a quiet, persistent interest in the relation between affect and machinery throughout Turing’s work. While he says that he considers these questions (can machines think, can they feel?) unconnected with chess, they are often intimately connected--conceptually--to the artificial systems he imagines, and to how those systems might network with humans.

The first of these questions is recognisable as what we now know as the Turing test. Can a machine be built that would behave (within certain narrow parameters) in a manner indistinguishable from a human? Commentaries on the Turing test often miss the point that the question ‘Can machines think?’ was deployed by Turing as an imaginative--rather than literal--challenge: “we are not asking whether all digital computers would do well in the [imitation] game nor whether computers at present available would do well, but whether there are imaginable computers which would do as well” (Turing 1950: 436). In this sense, the Turing test is an exploration of the imaginative limits of computers (Wilson 2002); and his 1950 paper is a plea to keep thinking inventively about the possibilities of machinic intelligence. So too with affect and computers. The question--could one make a machine which would have feelings like you and I do?--is less an engineering query than it is a provocation about whether it is conceptually feasible to coassemble affect and machinery. When we contemplate the possibility of feeling machines, what kinds of research projects do we initiate? What new computational ambitions are generated? What kinds of human-computer interaction do we wish for?

There is no special emphasis in Turing’s work on particular affects and their instantiation in machines. He tends not to invoke fear, say, or enjoyment or anger or distress as he imagines affective machines (for an example of such a focus on one particular affect, see Masanao Toda’s (1982) Emotional Fungus-Eater Robot, which is governed primarily by fear). Rather, Turing seems to be motivated by a curiosity about what the affects in general might produce--what the effects of affectivity might be. Specifically, affectivity is often mobilised by Turing to explain how relations between agents (particularly between humans and machines) are possible. At these moments Turing seems to be hypothesising that affectivity promotes interactivity: the affects are the glue that keeps one agent in touch with another. Even more specifically, it seems that affectivity gives Turing some kind of access to the inside of these agents, and their mutual, internal effects on each other. That is, the affective interactivity between human and artificial agents, as imagined by Turing, is one that involves an epistemology of the interior of those agents.

Hypotheses like these run counter to the prevailing feeling in contemporary critical studies that AI has been structured by a cognitivism that fiercely repudiates the emotional or embodied nature of artificial expertise. It has also been usual to comment on the strangely behaviourist nature of Turing’s work; the Turing test in particular seems to eschew any interest in interior states (Shieber 2004). Let me expand, then, on these other affectively-oriented paths in Turing. On first reading, an epistemology of the interior seems to be rejected by Turing in the 1953 paper on chess. In answer to the question “Can machines feel?” Turing says: “I shall never know. Any more than I shall ever be quite certain that you feel as I do” (Turing 1953/2004: 569). If we were to take him at his word at this moment, the emotional interior of other agents seems to be inaccessible: I shall never be quite certain how you feel. This suggestion echoes an argument about solipsism already dealt with by Turing in 1950. Responding to a criticism that no machine has an interior life (that it merely behaves rather than thinks), Turing said:

According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe ‘A thinks but B does not’ whilst B believes ‘B thinks but A does not’. Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks. (Turing 1950: 446)

There has been a gentleman’s agreement, if you like, that humans have cognitive capacities. A cognitive interiority is attributed to others, although the nature of that interiority is assumed rather than interrogated. In 1953, however, polite convention is abandoned. Turing seemingly accepts solipsism in relation to affect--for he shall never be quite certain that you feel as he does. Jack Copeland, one of Turing’s most ardent champions, underlines this disparity between thinking in 1950 and feeling in 1953:

[Turing’s] views appear to be that the question ‘Can machines think?’ is independent of the question whether machines can feel, and that an affirmative answer may be given to the former in the absence of our having any answer at all to the latter. (Copeland 2004: 566)

It is my contention that there isn’t as much independence between these two domains (thinking and feeling) as Copeland implies. While in 1953 Turing does not extend his gentleman’s agreement to cover affective states, elsewhere he often couples thinking and feeling in ways that make me doubt the argument that he simply refuses affect, that he blocks it, or considers it irrelevant to intelligent machines and the networks of interactivity in which they are situated. A more careful placement of this comment (“I shall never be quite certain that you feel as I do”) in relation to other work--specifically the marginalia--suggests that Turing is struggling to make sense of the interiority of agents rather than simply rejecting the idea of an internal affective landscape altogether; and often he seems to be interested in the commerce being transacted between interiorities.

If I might be allowed to use biographical data to support this view, let me repeat a well-known anecdote from Andrew Hodges’ biography of Turing (Hodges 1983). Arnold Murray tells Hodges about an incident early in his sexual relationship with Turing--a relationship that would eventually trigger the circumstances that led to Turing’s arrest on charges of gross indecency. This incident happened about the time the chess paper was written (around January 1952), which is also about the time Turing had started a Jungian analysis and had become very interested in the interpretation of his dreams. Murray and Turing are lying on the floor after dinner, a little drunk, and Murray tells Turing about a recurrent nightmare from childhood in which he is “suspended in absolutely empty space while a strange noise would start, growing ever louder, until he woke up in a sweat” (Hodges 1983: 452). When Murray is unable to elaborate on what kind of noise it was, Turing steps in and offers his own fanciful set of associations: the space is like an aircraft hangar, and the hangar itself is a mechanical brain in which Turing is trapped and has to play chess with the machine in order to be released. He would defeat this machine at chess by distracting it: first he would make it angry and then make it feel intellectually superior. At some point in this story he says “with terrific emphasis” to Murray: “Can you think what I feel? Can you feel what I think?” (452).

This example of the interimplication of affect and cognition--can feelings be thought? can thoughts be felt?--is indicative of how Turing structures the question of affect and machinery in other, more formal contexts. At important junctures, Turing imagines thinking and feeling to be chiasmatically related rather than opposed or disjunctive. That is, thinking and feeling can cross over or fold into each other. They can be transposed. This is why he suggests in 1950 that both chess (abstraction) and the child (sensate embodiment) would make good models for the future of AI; and why, in 1953, for no obvious reason, he inserts a question about affect into a paper on chess: Turing suspects (or perhaps only hopes) that there is a trajectory from thinking to feeling and from feeling to thinking. More specifically, this alliance between thinking and feeling isn’t a chaste cohabitation; rather than simply placing thinking and feeling side-by-side, Turing supposes that each contains the trace of the other. These two capacities don’t just abut, or lean on each other; rather they are projected and introjected into each other. Cognition inhabits and modifies feeling, as feeling inhabits and modifies thinking.

The American psychologist Silvan Tomkins has been perhaps the most articulate theorist of such cognitive-affective admixtures, which he sees as the rule rather than the exception in human psychology. Let me detour briefly through Tomkins’ work in order to amplify what is still conceptually juvenile in Turing. Arguing that the coevolution of affects (or motives) and cognitions has meant that neither can unilaterally govern the other, Tomkins takes the human psyche to be fundamentally composite and chiasmatic. He is worth quoting at length on this point:

Seen in the evolutionary nexus, both the motivational [i.e. affective] and cognitive systems must have evolved so that together they guaranteed a viable, integrated human being. It could not have been the case that either ‘motives’ or ‘cognitions’ should have been dominant since both halves of the total system had to be matched, not only to each other but, more important, to the environmental niche of the species. There is a nontrivial sense, then, in which the whole human being could be considered to be ‘cognitive’ (rather than being divided into a motivational system and a cognitive system). Because of the high degree of interpenetration and interconnectedness of each part with every other part and with the whole, the distinction we have drawn between the cognitive half and the motivational half must be considered to be a fragile distinction between transformation and amplification as a specialized type of transformation. Cognitions coassembled with affects become hot and urgent. Affects coassembled with cognitions become informed and smarter. The major distinction between the two halves is that between amplification by the motivational system and transformation by the cognitive system. But the amplified information of the motivational system can be and must be transformed by the cognitive system, and the transformed information of the cognitive system can be and must be amplified by the motivational system. Amplification without transformation would be blind; transformation without amplification would be weak. (Tomkins 1992: 7)

Importantly for the argument I am making here, Tomkins’ very strong theory of affect (in which he argues that the affects are the primary source of motivation of human behaviour) dovetails with work on artificial systems. He opens the preface of volume IV of Affect, Imagery, Consciousness by noting how Norbert Wiener’s work in cybernetics (which he first encountered in the late 1940s) offered the same kind of model that he (Tomkins) had been establishing in relation to affect. Cybernetics imagined a system of “multiple assemblies of varying degrees of independence, dependence, interdependence, and control and transformation of one by another. It was this general conception which, one day in the late 1940’s resulted in my first understanding of the role of the affect mechanism as a separate but amplifying co-assembly. I almost fell out of my chair in surprise and excitement” (Tomkins 1992: xiii). In the early 1960s, as he was writing the first two volumes of his affect theory, Tomkins was engaged in various attempts to think about affect and artificiality together: could one artificially simulate a neurotic process or temperament? What would an artificial human feel like to its interlocutors? Could artificial thinking be heated by affect? (Tomkins & Messick 1963). A decade before Tomkins undertook these simulations, Turing seems to have had a nascent awareness of the various alliances into which affect and cognition (human or artificial) might enter: feeling might amplify thinking, or obstruct it or incite it; thinking might partition and elaborate feeling, or smother it. Turing’s attempts to imbricate thinking and feeling are most often sporadic and unsuccessful; but this makes them no less instructive than his clear philosophical, mathematical and engineering triumphs. Turing’s philosophical methodology makes it difficult for him to elaborate on imaginative linkages amongst affect, machines, and interiorities; and perhaps this is why in 1953 we see this strange gesture of approach to the question of feeling machines, and then withdrawal.

Psychoanalytic machines

This question--“Can you think what I feel? Can you feel what I think?”--doesn’t only bring our attention to Turing’s interest in how feeling and thinking might coassemble; in a less noticeable way, it brings the interiority of thinking and feeling agents to the fore. It suggests that Turing and Murray are joined in the same kind of chiasmatic organisation as are thinking and feeling--they transform and amplify each other. This formulation of the relation between himself and Murray is profoundly anti-solipsistic: it is no longer agnostic about the interiority of other agents, and it is the beginning of a schema for thinking about how Turing might know what is inside Murray emotionally, and how Murray might feel what is inside Turing cognitively.

This traffic between the psychic interiors of agents has been a central concern for psychoanalysis. Psychoanalysis--especially in its contemporary forms--remains the premier discourse for thinking about interiority, interactivity, interaffectivity and intersubjectivity. The post-war period saw increasing hostility between psychoanalysis and the cognitive sciences--despite the hopes embodied in the interdisciplinary Macy Conferences, where analysts were seated with mathematicians (Von Foerster 1950: 224). These days, it would seem that these are two fields of knowledge utterly distinct in their axiomatic commitments. It is my suspicion, however, that there is more promise in a psychoanalytic-AI alliance than we have hitherto presumed. I turn to psychoanalysis in this final section in order to explore how it may be useful for thinking about affect and artificiality.

Psychoanalysis claims that the process of bringing others inside (introjection) is one of the first psychic events in the infant’s life, and a crucial accomplishment for anyone who is to attain a stable subject position. It was Sándor Ferenczi, a colleague and close friend of Freud, who first suggested the term introjection:

I described introjection as an extension to the external world of the original autoerotic interests, by including its objects in the ego. I put the emphasis on this ‘including’ and wanted to show thereby that I considered every sort of object love (or transference) both in normal and in neurotic people (and of course in paranoiacs as far as they are capable of loving) as an extension of the ego, that is, as introjection. (Ferenczi 1912: 316)

Introjection is a process whereby the outside world is included or integrated (Einbeziehung) into the core of one’s psychic structure. For those more familiar with, say, Andy Clark than with Ferenczi, it might be useful to think of introjection as a libidinised extended mind. This extension of mind to the world is also the making of mind; as Clark notes: “various kinds of deep human-machine symbiosis really do expand and alter the shape of the psychological processes that make us who we are” (Clark 2003: 32).

Nicolas Abraham and Maria Torok have offered an important clarification of the introjective process by differentiating introjection from incorporation. Incorporation is a singular, instantaneous event, provoked by a loss that for some reason cannot be acknowledged or communicated. In incorporation, the object is brought inside and entombed; it is a secretive manoeuvre that forms a pathological core that prevents the subject from mourning the lost object. Incorporation, in the sense developed by Abraham and Torok, is a compensation: “in order not to have to ‘swallow’ a loss, we fantasize swallowing (or having swallowed) that which has been lost” (Abraham and Torok 1972: 126). Introjection, on the other hand, is a more extensive process than the ingestion of an object. First, it doesn’t require a loss or trauma; introjection is more quotidian. Second, it involves the broadening of the ego: what is taken in is not an isolated object, but rather “the sum total of the drives, and their vicissitudes as occasioned and mediated by the object” (Torok 1968: 113). In contrast to the deadening effects of incorporation, introjection is a process of self-fashioning--an ongoing negotiation with, and acquisition of, the world (Rand 1994): “Introjection does not tend toward compensation, but growth” (Torok 1968: 113).

Many contemporary psychoanalytically-inclined developmental theorists take the expression, recognition and containment of affects to be one of the primary mechanisms by which such introjective, growth-oriented processes are established in infancy. Affective states (often in very raw and very negative form) are the first, and remain the most fundamental, substrate of intersubjectivity. The capacity to imagine the interiority of the other (and thus to reflect on one’s own mental states and to develop a robust sense of self and agency) is closely tied to affect regulation in infancy and beyond:

Our understanding of mentalization is not just a cognitive process, but developmentally commences with the ‘discovery’ of affects through the primary-object relationships. For this reason, we focus on the concept of ‘affect regulation,’ which is important in many spheres of developmental theory and theories of psychopathology… Affect regulation, the capacity to modulate affect states, is closely related to mentalization in that it plays a fundamental role in the unfolding of a sense of self and agency. In our account, affect regulation is a prelude to mentalization; yet, we also believe that once mentalization has occurred, the nature of affect regulation is transformed. Here we distinguish between affect regulation as a kind of adjustment of affect states and a more sophisticated variation, where affects are used to regulate the self. (Fonagy, Gergely, Jurist & Target 2002: 4–5)

In psychoanalytic circles, these growth-oriented processes are envisaged as strictly humanist events--transactions between mother and child, paradigmatically. A machine or a computational device figures in such contexts only in negative ways: at its most benign, as a kind of affectlessness (robotic behaviour; alexithymia), and at its worst, as psychosis (the influencing machine). Using Abraham and Torok’s terminology, we could say that for most psychoanalytic theorists relations with machines are incorporative (deadening), rather than orienting the subject toward growth. Elisabeth Roudinesco’s (2001) defence of psychoanalysis against the domination of pharmaceutical treatment and the reductiveness of the neurocognitive sciences is a case in point: she argues that the Freudian notion of the unconscious is not assimilable to cognitive, experimental or artificial models of the psyche. Even as she tries to articulate some kind of rapprochement between psychoanalysis and the neurosciences, she deprecates attempts to think of the psyche as a cerebral machine or the subject as an automaton. For Roudinesco, the task for psychoanalysis is to “bring a humanist response to the gentle and death-dealing savagery of a depressive society tending to reduce human beings to machines without thought and feeling” (Roudinesco 2001: 55; see also Derrida and Roudinesco 2004).

This notion that machines are entities radically detached from thought and feeling--that an attachment to machines or an identification with them necessarily entails affectlessness--is widespread in psychoanalytic literatures. It seems to me, however, that (following Turing’s lead) there is a more complex story to be told about how the human psyche connects with, elaborates, fantasizes about and introjects machines. Perhaps the artificial, the computational or the machinic are not as foreign to psychically robust subjects or to dynamic affective alliances as one might first imagine. Take, for example, Bruno Bettelheim’s (1967) moving account of Joey, the autistic boy who builds machines (real and imaginary) in order to function in the world. While not strictly psychoanalytic (or at least Bettelheim’s relation to psychoanalysis seems to be mired in scandal; see Pollak 1997), the case history is nonetheless significant for its narrative detail about how machinic and intrapsychic structures can become imbricated. Similarly, while Joey’s machines are not intelligent machines in the way envisaged by mainstream AI, their strange affiliations with humans and with affective life--like those of some of Turing’s imaginary machines--push us to think in new ways about the character of artificial-human contact.

At the outset, the Joey case history seems to demonstrate what an analyst already suspects about machines: that they obstruct relatedness to others. From a very young age Joey had been totally preoccupied with machines, especially fans and propellers. All his activities appear to be compensatory--they are narrowly and repetitively restricted to things rather than people:

His intense and obsessive preoccupation with fans ruled out all contact with reality. Nothing claimed his attention except what could become a gyrating propeller, such as a shovel, a leaf, a spoon, or a stick. No encouragement could motivate him, for example, to use the gyrated shovel for digging . . . His total lack of responsiveness to anything alive and his fascination with things mechanical formed a dramatic contrast. What is normally taken for granted in any therapeutic relation--that the therapist is there for the child--presented in this case a near hopeless problem. His orbit was so solitary that it seemed impossible to meet him as he circled on it, oblivious to all. (243)

However, Bettelheim’s account of Joey’s treatment also suggests a relation between his machines and other people that is dynamic and inventive. The machines don’t simply block affect and keep others at bay; they also convey self-states, and they have the effect of drawing people into his world. It is through machines that Joey is eventually able to resuscitate a rudimentary process of introjection:

During Joey’s first weeks with us we watched absorbedly, for example, as he entered the dining room. Laying down imaginary wire he connected himself with his source of electrical energy. Then he strung the wire from an imaginary outlet to the dining room table to insulate himself, and then plugged himself in . . . These imaginary electrical connections he had to establish before he could eat, because only the current ran his ingestive apparatus. He performed the ritual with such skill that one had to look twice to be sure there was neither wire nor outlet nor plug. His pantomime was so skilled, and his concentration so contagious, that those who watched him seemed to suspend their own existence and become observers of another reality. (235)

Bettelheim’s case history moves restlessly between these two understandings of machines: sometimes they are affectless obstructions, at other times they are the very means of connection and communication. Sometimes Joey separates human-ness from machinery. He says: “There are live people and then there are people who need tubes” (253); “Machines are better than people. Machines can stop” (260). At other times the human, the bodily, the affective and the machinic are wholly imbricated for Joey. He says: “That light bulb is going to have a temper tantrum” (254), “He broke my feelings” (313), “I used to break my tubes when I got mad. I let them lie on the floor, hurt them, stepped on them, made them bleed. I let them bleed all day” (254). The Bettelheim case history is a more articulate account than Turing’s of how we might imagine the coassembly of affects and machines. By following Joey’s recovery we are able to document how the machinic might be a means through which the world can be brought inside, affects regulated, and one’s sense of self and agency expanded. Both Bettelheim and Turing manifest anxiety about the chiasmatic, introjective pathways between machines and the psychic interior; yet both also point to the possibility that robust transferential alliances might be formed through a machinically oriented imagination.

Sometimes machines are the very means by which we can stay alive psychically; and they can as readily be a means for affective expansion and amplification as for affective attenuation. If machines can be accessed psychically in this way--if they can become a means for exploring affect and self-states; or, sadly, if they can become (as they so often do) the last prospect of emotional connection--then there must be some kind of intrinsic affinity, some kind of intuitive alliance, between the machinic and the affective. While it may become easy enough to incorporate (entomb) affects inside artificial agents (e.g., inside the banks of computer terminals attached to affective agents), one of AI’s most demanding tests will be to stay attuned to the wider circuits of affectivity that infuse machinic imaginaries. Psychoanalysis, despite its own internal difficulties with thinking artificially, may be a very useful ally for such future work in AI, robotics and HCI. As the new cognitive and artificial sciences give sustained attention to the role of affect in artificial systems, one of the most important challenges will be to operationalise affectivity in ways that facilitate pathways of introjection between humans and machines, between the world and the inside, and between different modes of embodiment. It was this kind of dynamic structure--the imbrication of psyche and machine, of a subject with its artificial objects, and of one inside with another--that Turing glimpsed in someone else’s dream while lying drunk on his dining room floor.

References

Abraham, Nicolas and Maria Torok. (1972/1994) “Mourning or melancholia: Introjection versus incorporation.” In The Shell and the Kernel: Renewals of psychoanalysis. Volume 1 (Ed. & Trans., Nicholas Rand) (pp. 125–138). Chicago: University of Chicago Press

Bettelheim, Bruno. (1967). The Empty Fortress: Infantile Autism and the Birth of the Self. New York: Free Press

Breazeal, Cynthia. (2002) Designing Sociable Robots. Cambridge, MA: Bradford Books/MIT Press

Brooks, Rodney. (1999) Cambrian Intelligence: The Early History of the New AI. Cambridge, MA: MIT Press

Clark, Andy. (2003) Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. New York: Oxford University Press

Copeland, B. Jack. (Ed.) (2004) The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life; Plus The Secrets of Enigma. Oxford: Clarendon Press

Derrida, Jacques and Elisabeth Roudinesco. (2004) For What Tomorrow… A Dialogue (Trans., Jeff Fort). Stanford: Stanford University Press. (Chapter 4: “Unforeseeable Freedom”)

Ferenczi, Sándor. (1909) “Introjection and transference.” In First Contributions to Psycho-analysis (Trans., Ernest Jones) (pp. 35–93). New York: Brunner/Mazel

———. (1912) “On the definition of introjection.” In Final Contributions to the Problems and Methods of Psycho-analysis (Ed., Michael Balint) (Trans., Eric Mosbacher) (pp. 316–318). New York: Brunner/Mazel

Fonagy, Peter, György Gergely, Elliot Jurist and Mary Target. (2002) Affect Regulation, Mentalization, and the Development of the Self. New York: Other Press

Haraway, Donna. (1985/1991) “A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century.” In Simians, Cyborgs, and Women: The Reinvention of Nature (pp. 149–181). New York: Routledge

Hayles, N. Katherine. (1999) How We Became Posthuman: Virtual Bodies In Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press

Hodges, Andrew. (1983) Alan Turing: The Enigma. New York: Simon and Schuster

Hsu, Feng-Hsiung. (2002) Behind Deep Blue: Building the Computer that Defeated the World Chess Champion. Princeton: Princeton University Press

Lenoir, Timothy. (2002a) “Makeover: Writing the body into the posthuman technoscape. Part One: Embracing the posthuman.” Configurations: A Journal of Literature, Science, and Technology 10:2 pp. 203–220

———. (2002b) “Makeover: Writing the body into the posthuman technoscape. Part Two: Corporeal axiomatics.” Configurations: A Journal of Literature, Science, and Technology 10:3 pp. 373–385

Lungarella, Max, Giorgio Metta, Rolf Pfeifer and Giulio Sandini. (2003) “Developmental robotics: A survey.” Connection Science 15:4 pp. 151–190

Newell, Allen, J. C. Shaw and Herbert Simon. (1958) “Chess playing programs and the problem of complexity.” IBM Journal of Research and Development 2 pp. 320–335

Pollak, Richard. (1997) The Creation of Dr. B: A Biography of Bruno Bettelheim. New York: Simon & Schuster

Rand, Nicholas. (1994) “New perspectives in metapsychology: Cryptic mourning and secret love.” In The Shell and the Kernel: Renewals of psychoanalysis. Volume 1 (pp. 99–106). Chicago: University of Chicago Press

Roudinesco, Elisabeth. (2001) Why Psychoanalysis? (Trans., Rachel Bowlby). New York: Columbia University Press

Shieber, Stuart. (Ed.) (2004) The Turing Test: Verbal Behavior as the Hallmark of Intelligence. Cambridge, MA: MIT Press

Toda, Masanao. (1982) Man, Robot, and Society: Models and Speculations. Boston: Martinus Nijhoff Publishing

Tomkins, Silvan. (1992) Affect, Imagery, Consciousness. Volume IV. Cognition: Duplication and Transformation of Information. New York: Springer

Tomkins, Silvan and Samuel Messick. (1963) Computer Simulation of Personality: Frontier of Psychological Theory. New York: John Wiley

Torok, Maria. (1968/1994) “The illness of mourning and the fantasy of the exquisite corpse.” In The Shell and the Kernel: Renewals of psychoanalysis. Volume 1 (Ed. & Trans., Nicholas Rand) (pp. 107–124). Chicago: University of Chicago Press

Turing, Alan. (1950) “Computing machinery and intelligence.” Mind 59:236, pp. 433–460

———. (1951/2004) “Can digital computers think?” In The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life; Plus The Secrets of Enigma (Ed., B. Jack Copeland) (pp. 482–486). Oxford: Clarendon Press

———. (1953/2004) “Chess.” In The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life; Plus The Secrets of Enigma (Ed., B. Jack Copeland) (pp. 569–575). Oxford: Clarendon Press

Von Foerster, Heinz. (Ed.) (1950) Cybernetics: Circular Causal and Feedback Mechanisms in Biological and Social Systems. Transactions of the Seventh Conference, March 23–24, 1950. New York: Josiah Macy Foundation

Wiener, Norbert. (1954) The Human Use of Human Beings: Cybernetics and Society (rev. ed.). Boston: Da Capo

Wilson, Elizabeth. (2002) “Imaginable computers: Affects and intelligence in Alan Turing.” In Prefiguring Cyberculture: An Intellectual History (Eds. Darren Tofts, Annemarie Jonson & Alessio Cavallaro) (pp. 38–51). Cambridge, MA: MIT Press