Adversarial Hacking in the Age of AI: Call for Proposals
29 – 30 January 2020
Location: Berlin, TBA
What does it mean to hack artificial intelligence? How can hacking techniques disclose the workings of AI and produce new knowledge and awareness about it? In January 2020, transmediale festival and the KIM research group at HfG Karlsruhe will organize a research workshop to explore AI from the perspective of its limits and vulnerabilities in order to study not just how AI works but also how it fails. PhD students, artists, scientists, mathematicians, hackers, and activists are invited to work together and map how new forms of hacking are emerging in the age of AI.
By “adversarial hacking in the age of AI” the workshop refers to facing and challenging a new and complex construct: the statistical models at the core of machine learning. And it specifically asks: What is the statistical model of a neural network or a support-vector machine? How can its biases and vulnerabilities be located? What does it mean, ultimately, to hack a statistical model? Hacking is understood as a question of knowledge production and as a methodology to break into the technological abstractions of AI. Participants are invited to study how AI systems fail but also how they learn from errors. The aim is to explore how a black-boxed system becomes less opaque, and therefore more vulnerable, but also how it ultimately absorbs and integrates faults, errors, and hacks for its optimization.
“Hacking AI” is already here, as so-called adversarial attacks show. In an adversarial attack, a machine-learning classifier is tricked into perceiving something that is not there: a 3D-printed model of a turtle, for instance, is classified as a rifle. The computer vision embedded in a driverless car can be confused into failing to recognize street signs. Artists Adam Harvey, Zach Blas & Jemima Wyman, and Heather Dewey-Hagborg have used adversarial processes in their projects to subvert and critically respond to facial recognition systems. But this is not just about computer vision: scientists in Bochum, Germany recently studied how psychoacoustic hiding can be used to attack automatic speech recognition systems.
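To make the mechanism concrete, here is a minimal sketch of a fast-gradient-sign-style adversarial perturbation against a toy logistic classifier. The weights, inputs, and function names are all illustrative assumptions, not any of the artworks or studies mentioned above; real attacks target deep networks, but the principle of nudging the input along the loss gradient is the same.

```python
# Minimal FGSM-style sketch against a toy logistic-regression classifier.
# All numbers here are illustrative assumptions, not a real model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """Nudge x in the direction that increases the loss for true label y.

    For logistic loss, the gradient with respect to the input x is
    (p - y) * w, so the attack adds eps times its elementwise sign.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy classifier that strongly trusts the first feature.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])                      # confidently class 1
x_adv = fgsm_perturb(w, b, x, y=1.0, eps=0.8)  # small structured nudge

print(predict(w, b, x))      # well above 0.5: classified as class 1
print(predict(w, b, x_adv))  # pushed below 0.5: classification flips
```

Even this two-dimensional example shows the asymmetry the workshop is interested in: a perturbation that is trivial to compute once the model's gradients are accessible is enough to flip the classifier's decision.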
These kinds of exploits raise immediate security concerns, of course, but they should also be acknowledged as subversive opportunities for raising awareness. Hacking has always created new processes by breaking old ones: “adversarial steganography,” for example, can be used to send encrypted private messages that look like neural network art. What other subversive and disruptive uses of AI can be imagined?
To work around these questions, we take the term “adversarial” as a new metaphor for a political agenda of the network society. This society has abandoned any hope in the horizontal structure of digital networks and finds itself transformed into a source of value for the normative regime of neural networks that monitor biometrics, perform facial recognition, and track consumer behavior.
We invite proposals from PhD students, scholars, independent researchers, scientists, artists, hackers, and activists. Research and practice at the intersection of critical media theory, digital humanities, history of science and technology, computational art, data science, statistics, and hacktivism are all welcome.
To apply, please send the following materials as one PDF document by November 10, 2019 to email@example.com.
- Your name and affiliation; a brief biographical overview of your study, work, or practice; contact details; your city of residence; and links to your work.
- A statement of no more than 300 words explaining why you want to attend the workshop and how you will contribute to it. Please tell us what you would bring to the workshop: a work-in-progress, a paper, artistic research, a piece of writing, or a critical computational project. It must relate in some way to automation and AI technologies (deep learning, perceptual and cognitive processing including image recognition, speech recognition, and natural language processing) as social, statistical-computational, and political systems.
Participants with positions at universities or civil society organizations are encouraged to arrange their own travel and accommodation in Berlin. A limited budget is available to support train travel within Europe for PhD students, independent researchers, artists, and activists whose institutions do not cover travel expenses; please indicate in your application if you need this support.
About KIM research group at HfG Karlsruhe
The research group KIM, Künstliche Intelligenz und Medienphilosophie (in English, Artificial Intelligence and Media Philosophy) was initiated by Prof. Matteo Pasquinelli at the Karlsruhe University of Arts and Design in 2018. In 2019, it received a start-up grant from the Volkswagen Stiftung for a project to examine how learning machines “see” the world and how humans see the world anew through these machines. The research network involves KIT Karlsruhe, Potsdam University, Leuphana University Lüneburg, and transmediale as event partner. Website: kim.hfg-karlsruhe.de
About transmediale
transmediale creates a space for critical reflection on cultural transformation from a post-digital perspective. For over thirty years, the annual festival for art and digital culture has been bringing together international artists, researchers, activists, and thinkers with the goal of developing new outlooks on our technological era through the entanglement of different genres and curatorial approaches. In the course of its history, transmediale has grown from its beginnings as VideoFilmFest to one of the most important events for art and digital culture worldwide. Beyond the yearly event, transmediale is a transversal, dynamic platform with a vibrant community and a strong network that facilitates regular publications and year-round activities including commissions and artist residencies. Website: https://transmediale.de
The workshop is supported by the Volkswagen Stiftung.