Constantly ready for the fact that we don't understand things

5. 5. 2023

Palo Fabuš, images generated by Nikola Ivanov and AI system DALLE 2, 2023

Interview: Nikola Ivanov and Palo Fabuš

Can less typical models of human thought processes help us understand artificial intelligence? Why is it important to have empathy for differences and new kinds of subjectivity? In what ways is reason understood from a human perspective limiting? How do humans and technology shape each other? Can thinking and feeling be separated? We talked with theorist Palo Fabuš about neurodiversity and artificial intelligence.

NI     What does “neurodiversity” mean to you?

PF     I came across this concept based on my long-standing interest in artificial intelligence. These two themes share the challenge of looking at different mindsets than the ones we are familiar with, and which have so far been marginalised. In terms of its relationship to artificial intelligence, neurodiversity offers a field in which we might discover new approaches to thinking differently. Neurodiversity is a complex phenomenon – it is a set of attitudes, a movement that seeks to rethink modern approaches to diagnoses such as autism, ADHD, dyspraxia, dyslexia, dyscalculia and others. It is an effort not to see them as diseases that we should treat but as personality components and mindsets that deserve to be accepted on their own terms. Not to see them as a deficit or deviation from the desired normal.

NI     Neurodiversity is a relatively new term, first used by sociologist Judy Singer 25 years ago. Has the attitude of mainstream society towards neurodivergent people changed since then?

PF     It is changing on several levels. Not only at the level of how we perceive these people but also at the level of evaluating their diagnosis. Self-diagnosis is starting to be seen as valid. We are beginning to understand that, on the one hand, medical science does not have a monopoly on diagnosis and, on the other, that neurodivergent people see their diagnosis as part of their identity and not as a defect. On yet another level, it is an effort to rethink the culture of otherness, to take it as an opportunity to learn a new kind of sensitivity, the ability to pose questions, to be open to others – not only to neurodivergent people but to take others as they are in general. At the same time, it demands that we not relate to the world around us, to others or to ourselves as something that can be explained by general concepts and established rules. It asks us not to close our minds and to be constantly prepared for the fact that we do not understand things, that we make mistakes in trying to gauge them and determine what is right and trustworthy.

NI     Besides pathologisation, are there examples of tolerance, inclusion or awareness of the positive contribution of neurodivergent people to society in history?

PF     Today, there is a tendency to return to historical figures and reinterpret them as neurodivergent personalities. For example, there is talk that Isaac Newton was probably on the autistic spectrum; according to one study, Arthur Schopenhauer's texts are characterised by a metonymic way of thinking that is strikingly reminiscent of the thinking of some people on the spectrum, etc. I hope that even on that basis, we can gradually deconstruct the notion of "mad scientists", which is more a label of disinterest and exclusion.

NI     How exactly did you come to connect the topics of artificial intelligence and neurodiversity? In what ways do you see their parallels?

PF     I grew up with science fiction, so artificial intelligence is probably my oldest interest in thinking about the future – though we can already experience it today. I have a certain distaste for the way AI is sometimes presented. I don't necessarily mean the terminator clichés. Even among academics and artists, there are atavisms that see AI through the old lens. They imagine it as an artificial personality capable of competing with, for example, human artistic creation. The problem is that these atavistic notions of the subject are too reductive and conservative. I think it is a pity that instead of attempting to discover new kinds of thinking on their own terms, we are constantly trying to liken them to humans. Neurodiversity is one of those approaches that could teach us a new way of looking at artificial intelligence. Consider Alan Turing's writings on future artificial intelligence: his famous test of whether artificial intelligence is capable of really thinking never assumes that it could look any different from human intelligence. Moreover, the comparison is not with humans in general but with a white adult Western human, most likely male. If we were to apply this test to people from another culture, women and children or neurodivergent persons, they would most likely fail it and thus not be considered thinking.

NI     Which is not the case, of course…

PF     Of course. Artificial intelligence applied in various ways – whether we're talking about a vacuum cleaner that learns about a room before vacuuming it, a car that learns to react to an impending collision, a device that detects a cancerous tumour or the state of a beehive – works better than a human. Planetary logistics systems are moving towards what might be described as a singular calculative entity. We see all sorts of advisors, whisperers, demons, semi-autonomous bots. A whole fauna of intelligences is being born before our eyes and it is a shame to subject it to the criterion of human thinking, which, moreover, is not as broadly human as we might have thought. There is a panoply of new subjectivities that are full-fledged intelligences without having a personality of their own.

NI     But how can we imagine these differences?

PF     This is difficult to imagine and calls for a new kind of sensitivity because, for these very reasons, we do not understand artificial intelligence or neurodivergent persons and interpret them through the prism of neurotypical thinking as flawed, deficient and deserving of exclusion. I want to avoid equating neurodivergent persons with computers or robots – a stereotype that persists even today. Rather, my point is that these two topics are related as two calls for openness to accept other forms of thinking as valid, complete and, above all, not reducible to what we already know. I am also led to this by Bernard Stiegler and his idea of the mutual shaping of humans and technology, and by Donna Haraway and her idea that digital technologies lead to the feminization of work, which creates opportunities to see a female perspective. It is interconnected on multiple levels. I also look for this anthropotechnical reciprocity in thinking about artificial intelligence and neurodivergent persons.

NI     In his book New Dark Age, James Bridle shows that we are increasingly dependent on sophisticated technologies without understanding how they work. He argues that we are facing a dystopian future, the manifestations of which we are already seeing today – an increase in inequality, surveillance, fundamentalism, conspiracy theories. A concomitant of artificial intelligence is also 'artificial stupidity'. In this light, don't techno-optimistic visions seem naive? What do you see as the possible risks of artificial intelligence?

PF     I am not concerned with a techno-optimistic vision that it will turn out well but rather with the opportunities and problems that are opening up before us. Measuring our dependence on technology is difficult, as it has been with us since time immemorial. Even the latest archaeological findings suggest that Australopithecus, the ancestor of the genus Homo, was already using tools, so technology predates modern humans. Sure, we are dependent on technology but we have been since the Neolithic Revolution and the development of agriculture. When we invented the phonetic alphabet, we became dependent on the ability to create written records – something Plato had already warned against, arguing that we were compromising our ability to remember. That's why I like the thesis that technology acts as prostheses for limbs we never had. It expands us, differentiates us, deprives us of something, adds something to us. We differ from earlier cultures in our heightened self-reflexivity, even hyper-self-reflexivity. With the rapid development of technology also comes its rejection, the pursuit of natural alternatives. The diversification is so great that we cannot speak of dependence on technology as something unequivocally bad, although, of course, it always poses certain threats.

NI     You talked about the need to understand kinds of thinking other than the human, which reminded me of the current calls to abandon the anthropocentric framework to create space to address complex planetary problems like climate change. How can artificial intelligence help us do this?

PF     Artificial intelligence gives us access to more complex phenomena, more robust data, in which it is able to find patterns better than humans. But our intelligence, human intelligence, is also artificial because we have been adjusting, expanding and modulating it forever through media and various tools. Technology is, in a manner of speaking, enlarging the workbench of our consciousness. We delegate various abilities and capacities to gadgets, machines, and thus they become involved in the circuits of our thinking, they become part of us. And this is where our so-called natural intelligence is artificial from the very beginning because it is developed in the aforementioned prosthetic way. According to German media studies, the media are the body of thought – that is, thinking is not located in the brain. We could start with Bergson's materialism, according to which the mind is in the things around us and also in the brain. In short, it cannot be located in one place and arises by coupling the brain with various phenomena. Thus a hammer, a ladder, a notebook, a computer or a ploughed field are all part of thought. Artificial intelligence brings something new to this evolution – semi-autonomous subjectivity, and thus the non-human ability to think and make decisions.

NI     I am reminded of Spike Jonze's film Her, in which the main character forms a relationship with an artificial intelligence. The film does a good job of illustrating the confusion that comes from the fact that an AI can behave like a thinking entity with both a personality and emotions. Why are we reluctant to accept this, and why might we find such a notion inappropriate or even frightening?

PF     I think it's quite the opposite. What people imagine as the threat or promise of AI comes from anthropomorphic ideas – that is, we expect AI to be able to understand what humans are about, to comprehend or even show emotions.

I think it is much more difficult to imagine an AI that is capable of fully-fledged thought without showing emotion. This is due to the post-Cartesian claim that the mind cannot be separated from the platform on which it occurs. For a computer to manifest emotion as a functional component, and not just as a simulated facade, its body would have to be involved as well. But if it has a different body, its intelligence and sensitivity will manifest in a different way. The separation of thinking and feeling is not sustainable. Consider, for example, David Hume's conception of emotion as a starting point. According to him, the exercise of reason is just a very specific, stable and calm emotion. Indeed, if it were not, it would be strange that logical, elegant solutions arouse emotion in us, even though we think of them as purely rational products. Instead of emotion, I would look for something in machines that we do not understand, that is new to us and that requires new concepts, grasps and approaches. It is a challenge to think of sensitivity as a computational problem and not as an expression of emotion.

NI     In one of your lectures, you mentioned the philosopher Erich Hörl who speaks of artificial intelligence as "the fourth blow to human self-love." What does he mean by that? 

PF     It is a reflection that follows the 1917 Introductory Lectures on Psychoanalysis, in which Sigmund Freud defines the three blows to human self-love. The first is the Copernican one, in which humans realise that they are not the centre of the universe. Then the Darwinian, when they realised that they are not exceptional compared to the rest of the fauna, and the third is psychoanalytic, when they come to the realisation that "I am not the master of my own house." Hörl adds a fourth, the cybernetic blow, consisting of the fact that humans do not have a monopoly on thought and that it can take place on platforms other than the human body.

IMAGE CAPTIONS

1–2 | Palo Fabuš, images generated by Nikola Ivanov and AI system DALLE 2, 2023
3 | Palo Fabuš, Photo Nikola Ivanov, 2023

Nikola Ivanov

is an intermedia artist whose works focus on time, memory, biopolitics and sleep. He has participated in numerous solo and group exhibitions in the Czech Republic and is the editor of the anthology Odpočinek v neklidu. He is currently a doctoral student at Prague’s UMPRUM; his artistic research focuses on the relationship between modernity and the colonization of nighttime.

Palo Fabuš

has long been interested in the relationship between digital technology and the human condition. He studied Computer Science and Media Theory in Brno and Sociology in Prague; between 2014 and 2015 he took an internship at Ruhr Universität in Bochum, where he studied contemporary German media philosophy. He is currently assistant professor at the Centre for Audiovisual Studies at FAMU in Prague. Since 2011 he has been the editor-in-chief of the magazine Umělec. He has also published in Literární noviny, Flash Art, A2, Furtherfield, Vlna and Mediální studia.