To accomplish this, the students applied three core cybernetic concepts to the functioning of ChatGPT. The first was feedback: each question posed by a user constitutes a return of information that allows the system to adjust and refine its responses in real time.
The second was information processing: ChatGPT receives, filters and analyses each input before generating a response calibrated to the nature of the question, whether professional, personal or emotional. The third was self-regulation: the system continuously adapts to each individual user, adjusting its outputs according to context, stated criteria and the history of previous exchanges.

To make this visible, the students constructed a life-size human mannequin from five cardboard panels, cut, assembled and painted entirely in black. The human scale was deliberate: visitors were meant to see themselves reflected in the figure. The mannequin's eyes reproduced the ChatGPT logo, a reference to integrated prosthetics and to the idea of the augmented human, or cyborg, in which artificial intelligence becomes part of the body's own system. The figure was named Gauthier-Patrice, the name ChatGPT itself chose when asked what human name it would select, a detail the students used to illustrate the blurring boundary between human and machine already identified at the Macy Conferences between 1946 and 1953.
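The three concepts can be illustrated with a minimal sketch. This is not ChatGPT's actual architecture, and the class, method names and keyword matching below are purely hypothetical; the sketch only shows how feedback (exchanges returned to the system's own state), information processing (classifying an input before answering) and self-regulation (calibrating the answer to a user's profile and history) fit together in a single loop.

```python
# Toy illustration of the three cybernetic concepts applied to a
# conversational system. Hypothetical code, not ChatGPT's implementation.

class FeedbackAssistant:
    """A toy conversational system that adapts to accumulated context."""

    def __init__(self):
        self.history = []        # feedback: prior exchanges feed back in
        self.user_profile = {}   # self-regulation: per-user adaptation

    def update_profile(self, **params):
        # e.g. a visitor volunteering their age or professional situation
        self.user_profile.update(params)

    def respond(self, question: str) -> str:
        # information processing: classify the input before answering
        kind = self._classify(question)
        # self-regulation: the answer is calibrated to profile and history
        context = f"{len(self.history)} prior exchanges, profile={self.user_profile}"
        answer = f"[{kind}] answer shaped by {context}"
        # feedback: the new exchange is returned to the system's own state
        self.history.append((question, answer))
        return answer

    def _classify(self, question: str) -> str:
        # Crude keyword heuristic, standing in for real input analysis
        lowered = question.lower()
        if any(w in lowered for w in ("job", "career", "work")):
            return "professional"
        if any(w in lowered for w in ("feel", "love", "sad")):
            return "emotional"
        return "personal"
```

Each call to `respond` both consumes and extends the system's state, so two identical questions asked at different points in a conversation yield differently calibrated answers, which is the core of the feedback loop the students were demonstrating.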
At a time when digital tools have become central to everyday life, the students chose to examine one of the fastest-growing of them all. Since the release of its first version in 2022, ChatGPT had surpassed 200 million weekly active users by August 2024, establishing itself as both a personal assistant and an economic force, with the French AI company Mistral AI valued at nearly 6 billion euros in June of the same year. The students' objective was twofold: to demonstrate the concrete impact of ChatGPT on daily human activity, and to show that Norbert Wiener's theory of cybernetics, developed in 1947, remains entirely applicable to the systems of today.

On the day of the exhibition, the installation functioned almost as a piece of sociological research. Visitors were invited to pose a question to ChatGPT on a computer placed beside the mannequin, optionally adding a personal parameter such as their age or professional situation to observe how the system adjusted its response accordingly. The questions and answers were then transcribed onto Post-it notes and stuck directly onto the mannequin's body as the exhibition progressed.

The range of questions proved telling: some visitors asked about politics, others sought relationship advice, and many addressed the tool with the same casual brevity they would use in daily life, asking what to eat for dinner or how to dress the next morning. This demonstrated not only familiarity with the tool but a degree of dependency, which was precisely what the students had set out to examine.

The students also raised broader questions about the direction of this trajectory. Wiener himself had understood the risks embedded in his own science, breaking with the American military and publishing a public letter denouncing the irresponsibility of those who weaponised his research.
The students noted that today's AI systems, including ChatGPT, are prone to what is known as hallucination: producing responses that appear plausible but are entirely fabricated, a reminder that everything created by humans remains subject to human error. In the end, the project conveyed that the boundary between human and machine is not a fixed line but a threshold that continues to shift, and that navigating it responsibly remains as much a philosophical question as a technological one.