022521_The Inverse Uncanny Valley: What we see when AI sees us…

Benjamin Bratton made me think with this talk, which he gave as a response to AI and the cultural directions in the development of the technology, in conjunction with curator Claudia Schmuckli's exhibit, The Uncanny Valley, at the de Young. Bratton begins with the really important query: why are we making AI a reflection of ourselves (as if we really know what 'human' is)? Why do we need to envision AI as something like ourselves? He points to the problematic formulation of this conundrum with a series of Sophia-like robot images and bot-types, and likens the intention to copy the 'human' to the awful circus that transpired around the homosexuality of computer scientist and theorist Alan Turing. As soon as Turing was outed, his story was taken from him and stuffed violently back into the closet, to such a degree (he was made to take medications to make him "normal") that it drove him to his untimely death. Bratton's precise comparison between the push to make AI in our own image and the crushing suppression of Alan Turing is a critique of a type of thought (a philosophy of technology) that he sees heading in a similarly violent direction. He makes it clear that by taking this direction we are, most importantly, overlooking the fundamental difference of machine intelligence: it is not something that is simply not 'human', but rather something inherently tied up with and bound to human intelligence, and we are essentially missing an opportunity to learn from what we cannot readily know, from what we might "see" about ourselves, and from what the intelligence of machines really has to offer. An example he develops concerns the violent pathologies inherent in the programming of facial recognition software as it profiles and categorizes its subjects. In this act of analysis, the problem is the transposing of those pathologies back into the code, inculcating them within it and within the powerful delivery of the software.
This critique has begun on AI as well, in basic questions regarding the formulation of algorithmic logic. Bratton echoes Donna Haraway's appeal, in her 1985 "A Cyborg Manifesto," to see the human and non-human aspects of technology as inextricably linked together in the formulation of it all, through which she rejects the binary logic that would separate human from not-human, when in fact there is always an intertwined relevance of one to the other.

Bratton illustrates his powerful comparisons with great precision, and he argues that it is simply the "wrong" direction to try to "make AI human" and the bearer of bad logics. He notes efforts to invent a palatable 'Siri' or 'Alexa' home attendant for every variety of leaning, or the "philosopher in a petri dish": the domestication of AI and a misunderstanding of its power altogether. Instead of modelling AI and its capacity to automate on the 'human' as 'human' is constructed, he suggests a transdisciplinary approach that views AI, instead, as a distributed intelligence about which we know very little, and through which we might learn something. He suggests, too, that we might not want to know what it sees about us. His examples and images explore Japanese history, dolls, robotics, and the cultural traditions from which AI emerged. It's a clearly voiced and hopeful theoretical starter kit on key debates in the production and reproduction of AI.

Note: You may need to subscribe to this excellent YouTube channel, Fine Art Museums of San Francisco, in order to see this, but here is the title & link: The Inverse Uncanny Valley: What We See When AI Sees Us.