Future False Positive is a warning about the limits of modelling reality. It’s a vision of a world forced to correspond to a necessarily inadequate model of itself and a demonstration of a technology stack and epistemic mode that’s shrinking the horizon of possibility.
In After the Future, Franco ‘Bifo’ Berardi documents the dying days of the future. Formerly incubated in utopian promises and cemented in an assumption that “things are always going to get better” - both now in decline - our future, Berardi argues, has come and gone. Neoliberalism and contemporary cultural production have primed us for the logical limitations of a technology stack and epistemic mode that is becoming dominant.
In The Nooscope Manifested, Vladan Joler and Matteo Pasquinelli locate the logical and political limitations of artificial intelligence in its inability to deal with a unique anomaly. The process of training AI establishes no causal understanding of the observed phenomena, and thus inherently lacks the capacity to predict or generate something statistically unlike that which it has already “seen”. When AI predictions are manifested in the real world, when humans interact with them, they both produce and condition patterns of behaviour.
Separately, in New Dark Age, James Bridle argues that AI is incapable of distinguishing between reality and its model of it, and that as deep learning becomes the dominant epistemic mode of our time, we are losing that ability too. The claim has already been made that the scientific method has been rendered obsolete, since correlation has now superseded causation.
Taken together, these accounts render a reality in which all novelty has been foreclosed, because the tools we rely on to interpret the world around us depend on that world remaining the same.
Future False Positive seeks to visualise these entangled externalities. The work consists of six channels of video: clips of autonomous vehicle training footage extended into the ‘future’ using state-of-the-art generative machine-learning techniques.
The work leverages a Next Frame Prediction (NFP) algorithm and the Pix2PixHD architecture. Trained on a dataset of video frames, the NFP algorithm builds an internal representation of the frames’ graphical content and how it changes sequentially. Once trained, the algorithm attempts to predict what the next frame would look like given a prompt frame. By feeding each generated image back in as the next prompt, the algorithm is able to generate video.
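The feedback loop described above can be sketched in a few lines. This is a minimal illustration, not the work’s actual pipeline: `toy_model` is a hypothetical stand-in for a trained Pix2PixHD generator, chosen so that its output drifts toward a mean value, loosely analogous to the repetitive collapse visible in the generated clips.

```python
import numpy as np

def generate_future(predict_next_frame, seed_frame, n_frames):
    """Autoregressive next-frame generation: each predicted frame
    is fed back in as the prompt for the following prediction."""
    frames = [seed_frame]
    for _ in range(n_frames):
        frames.append(predict_next_frame(frames[-1]))
    return frames

# Hypothetical stand-in for a trained generator: pulls every pixel
# slightly toward the frame's mean, so detail decays over time.
def toy_model(frame):
    return 0.9 * frame + 0.1 * frame.mean()

# Hypothetical seed: in the work, this would be the final frame
# of a training clip for one of the six camera channels.
seed = np.random.rand(64, 64)
video = generate_future(toy_model, seed, n_frames=30)
```

With each iteration the frame's variation shrinks, a crude numerical echo of how the generated footage grows ever more repetitive the further it runs from its prompt.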
For Future False Positive, an NFP model was trained on a dataset of video footage from each of the six autonomous vehicle camera channels. To produce the videos, the algorithm received the final image of each training clip as its initial prompt. After each clip ends the algorithm takes over, and “the future” is rendered from the past.
Autonomous vehicles were once a fixture of the science fiction imagery that fuelled the utopian imaginary that Berardi argues is absent in contemporary culture. As they materialise via Silicon Valley techno-libertarianism, it is not lost on Berardi that this class was the last to capitalise on the final utopian project, 90s cyberculture.
The self-driving car is also, at least in the public imaginary, one of the clearest manifestations of AI in the real world, and an application where an AI’s inability to deal with unique anomalies has already resulted in the loss of life.
The generated video in Future False Positive consists of nothing but recycled patterns of pixel assemblages. As each clip progresses and moves further from the original frame the shapes and images produced become ever more repetitive.
In Future False Positive we can see Joler and Pasquinelli’s ‘dictatorship of the past’. But the work also points towards Bridle’s issue of using these tools to understand the world. Even as all predicted ‘futures’ melt into the same reconfigured pixel data, the object detection algorithm continues to classify with confidence.
The more faith is put in an artificial intelligence system, the more hyperstitional its predictions become. That is to say, the results themselves are manifested by their own prediction. The further AI systems creep into the governing of our reality, the more our reality will become conditioned to look like their predictions.
This work was commissioned by the Indeterminacy Group at the University of Dundee as part of NEoN Digital Arts Festival.