

For full details about the conference, please visit hastac2023.org
Saturday, June 10 • 11:15am - 11:40am
Prediction AR - An Augmented Reality Experience


INTRODUCTION
As an international emerging media artist, a PhD candidate and a former documentary producer, I make impact the main focus of my work. I specialize in digital storytelling and critical making, spreading digital literacy and calling for transparency in the artificial systems we cannot opt out of. This is the moment when we, critical makers, can shape ethical codes of transparency, equality and equity. Non-fiction digital storytelling should inspire collectivity, make artificial systems transparent, and have a strong impact on the redistribution of power in the public sphere, challenging existing structures through co-creation, collaboration and open source coding.

Prediction AR is a gamified multi-user augmented reality experience, which discloses scoring and risk assessment systems (such as predictive policing, credit scoring, and even the citizen scoring used in China) in a playful, yet realistic way. These scoring systems serve as the basis for future behaviour prediction, which rests on datafication, abstraction, classification and patterning, processes that disqualify or omit certain data and thus give rise to error and bias. “How data are conceived, measured and employed actively frames their nature” (Kitchin & Lauriault, 2014, p. 4).
What happens if our datafied selves become real and influence our future? What and who gets to be perceived, and what and who gets to be silenced? This world’s centre and its margins are inverted. It is a world where the “noise” becomes the “signal” (Steyerl, 2017), and your anonymized data transform into particles forming patterns in a collaborative, open, ever-changing audiovisual artwork. Welcome to a world where data is never raw, and art is always political.
CRITICAL DIGITAL STORYTELLING
I dove into the politics of algorithms and predictive policing in 2016, while working as a producer on the documentary Pre-Crime, which dealt with these systems. This work led me to co-create and produce the iOS app and WebGL experience Pre-Crime Calculator (2017), in which we used facial recognition and live data from different locations to predict users’ heat scores (their potential of being involved in a crime). At the moment, I am working on a voice-AI ethics in health care project, prototyping the first stage of this experience with a focus on voice. In Prediction AR, I use mechanics similar to those in Pre-Crime Calculator and the voice-AI project, combining facial and voice recognition through ChatGPT and speech-to-text software (e.g. Whisper by OpenAI or Google Speech-to-Text).
I want to show how ‘easy’ it is to make an assumption about each of us once our digital selves are datafied and spit out through the predictive machine. At the same time, my aim is to motivate participants to engage with these technologies through a critical but also collective lens. I will use the methods of critical making (Ratto, 2017), the practice-based method of creation and making (Gauntlett, 2018; Rapallo, 2019), and personalized algorithmic storytelling (Uricchio, 2017, p. 198) to create this critical digital experience in augmented reality. The experience aims at demystifying predictive technology used in, for instance, health care, predictive policing and credit scoring, by making it playful and understandable for the public. This gamified world is partly fictional and partly real: whereas the participants, their facial recognition data and their locations are real, the gamified narrative is fictional. I call this the methodology of critical digital storytelling.
USER JOURNEY
A synthesized voiceover guides you through a sonic and visual journey in augmented reality. You interact with an artificial facial and speaker recognition system via your biometric data (face, voice) to create your enrollment biomarkers (the system extracts some of your biometric features), and you answer questions through an implemented chatbot (using e.g. ChatGPT, the Whisper API or Google Speech-to-Text).
The twist is that you can only enter the second part of the experience, and become part of a living collaborative digital artwork, if your score reaches the “risk potential”. This means that you must be profiled as “undesirable” to access the system (i.e. potentially dangerous because you are in a bad mood; your face is asymmetric and the system can’t recognize it, automatically giving you a high risk score; your voice shows traces of illness; or you have an unrecognizable accent in English). The system must misrecognize and misclassify you. So congrats if you are profiled today as an “undesirable” candidate for a loan, a job or a good insurance plan; you might even end up on a heat list of the potentially dangerous just because you are in an “urban” area, in a bad mood, sick, or have a strong accent or a facial anomaly! Trick the system as much as you can! But what happens if you are “positively” identified as you? Try to be somehow different: fake an accent, use facial hacks, or go to a less affluent area of your city. If you succeed, you will enter the second part of the experience.
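The inverted gate above can be sketched in a few lines. This is a purely illustrative toy, not the project’s actual scoring logic: the signal names, weights and threshold are all invented here to show how misrecognition can be made to *raise* a score and unlock entry.

```python
from dataclasses import dataclass

@dataclass
class EnrollmentSignals:
    """Hypothetical features an enrollment system might extract (all 0.0-1.0)."""
    face_match_confidence: float  # how well the system recognizes your face
    voice_anomaly: float          # e.g. traces of illness or a strong accent
    mood_negative: float          # inferred "bad mood"
    area_risk: float              # score attached to your current location

RISK_THRESHOLD = 0.5  # the fictional "risk potential" gate; value is invented

def risk_score(s: EnrollmentSignals) -> float:
    """Toy score: the worse the system reads you, the higher the score."""
    misrecognition = 1.0 - s.face_match_confidence
    # Equal weights for simplicity; a real system would tune these.
    return (misrecognition + s.voice_anomaly + s.mood_negative + s.area_risk) / 4.0

def may_enter_part_two(s: EnrollmentSignals) -> bool:
    # Inverted logic of the experience: only the "undesirable" get in.
    return risk_score(s) >= RISK_THRESHOLD
```

Under this sketch, a participant the system recognizes perfectly, in a “good” area and a neutral mood, scores 0.0 and is locked out; faking an accent or using a facial hack pushes the score up and opens the door.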
The resulting abstract artwork resembles GAN-based art (DALL-E, Midjourney, Stable Diffusion), but is contextualized, and thus politicized, through a synthesized voiceover reciting a list of fictitious prediction scenarios, which re-assemble every so often to create new forms and new texts symbolizing the instability of our datafied identities and the discourse of algorithmic truth (using ChatGPT, Whisper and point-cloud data visualization software). The project addresses bias in datafication and the questionable neutrality of data. As Gitelman and Jackson put it, “raw data is an oxymoron” (Gitelman, 2013, p. 2): data is never neutral; it is always already “cooked”, processed. Abstraction is implemented in these processes from the very start. Through abstraction, humans try to control the past and present and to predict the future. Through these predictions, they attempt to control society and seemingly eliminate error. “How data are conceived, measured and employed actively frames their nature” (Kitchin & Lauriault, 2014, p. 4).
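One way to realize a cloud that “re-assembles every so often” while remaining tied to participants’ anonymized data is deterministic re-seeding: hash the data together with an epoch counter, so the same data yields the same pattern within an epoch but a new form when the epoch ticks over. This is a minimal sketch of that idea, assuming a generic point-cloud renderer consumes the coordinates; it is not the project’s actual pipeline.

```python
import hashlib
import random

def particles_from_data(anonymized_blob: bytes, epoch: int, n_points: int = 256):
    """Map an anonymized data blob to 3D particle positions for one 'epoch'.

    Re-seeding with the epoch number makes the cloud re-assemble into a new
    pattern each epoch, while the same data in the same epoch always yields
    the same form. Coordinates lie in [-1, 1] on each axis.
    """
    # Derive a stable seed from the data plus the current epoch.
    seed = hashlib.sha256(anonymized_blob + epoch.to_bytes(8, "big")).digest()
    rng = random.Random(seed)
    return [
        (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        for _ in range(n_points)
    ]
```

Advancing `epoch` on a timer would drive the periodic re-assembly, while keeping each participant’s contribution reproducible rather than purely random.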

Speakers

Michaela Pnacekova

PhD candidate, York University
Michaela Pňaček(ova) is an award-winning XR artist, PhD candidate and ELIA scholar in Cinema and Media Arts at York University, Toronto. As a Graduate Assistant at the Immersive Storytelling Lab headed by Dr. Caitlin Fisher, she has worked on multiple prototypes focusing on human-machine...


Saturday June 10, 2023 11:15am - 11:40am EDT
TBA 207 Ryerson St, Brooklyn, NY 11205