EMAP resident artists' interview – PART 1

27.03.24

In the first part of our interviews with the EMAP artists, we focus on their backgrounds and their collaborative models.


INTERVIEW PART 1

Andreas Zingerle

Since 2020 you have held the director position at ‘mur.at’, an association known for promoting open source tech and infrastructure for the arts & culture sector, besides offering artist residencies, organising worklabs and producing the monthly radio program ‘Netzrauschen’ at ‘Freies Radio Helsinki’. What is the philosophy behind your activities and objectives at ‘mur.at’? Why do we need open source technology?

Andreas: Since 1999, ‘mur.at’ has operated a server farm in Graz (Austria):

The NETWORK ‘mur.at’ is a virtual, constantly expanding platform of artists and cultural workers from different sectors for the development and promotion of network culture, Web-Art, Sound Art, Software Art and Digital Art in general.

The INITIATIVE ‘mur.at’ is committed to technological development using Free/Libre & Open Source Software. Members can have email addresses, run their newsletters, host their websites, and have access to Nextcloud storage and VoIP video chat with BBB and jit.si, just to name a few services. Besides that, members often need tech support to run their own services on virtual machines,
e.g. the cyberperformance tool “UpStage” for live online performances by Helen Varley Jamieson. Media diversity, unrestricted flow of information and transparent knowledge transfer form the core content parameters of the NETWORK INITIATIVE. This knowledge transfer is achieved, for example, through skill-sharing workshops, for which I secured a ‘Work 4.0’ digitalisation grant for the association. Thanks to the grant we can now offer workshops on the tools already mentioned, and furthermore offer safe spaces through workshops specifically tailored to our female and diverse members. All tools are available to all our members, and we demonstrate that there are open source alternatives to the big five tech companies such as Alphabet, Amazon, Apple, Meta, and Microsoft, whose business models rely on collecting, profiling and re-selling user data.

With our artist-in-residence program we offer a two-month on-site residency in Graz: residents work very closely with the operative team, develop their projects on mur.at servers and help shape the worklab, an extended un-conference weekend that takes place at the end of the residency. Past residents include Ricardo Ginès from Tactical Tech Berlin and César Escudero Andaluz with his work Metamanteros. The worklab creates a temporary space in which local artists and technicians, as well as international guests and experts, discuss topics from the field of digital technology, art and their social implications. People from different fields and with different backgrounds meet and work together on ideas and drafts that become the starting point for future projects, publications and productions of the current and coming year. Alongside this work-in-progress environment, I am really happy that we produce the monthly radio program ‘Netzrauschen’ at ‘Freies Radio Helsinki’, where we interview digital and net artists about their artistic backgrounds and ongoing projects, document media art festivals, or give our artists in residence the possibility to reflect on their processes.

 

Linda Kronman

You have just finished your PhD research in the Machine Vision in Everyday Life project at the University of Bergen (NO), focused on artworks that reveal biases and other issues in the use of Artificial Intelligence (AI). Could you tell us briefly about your research results, using some artworks as examples to describe them?

Why do we need AI feminism?

Linda: I want to highlight two reasons why we need AI feminism. Firstly, it provides a framework to ask critical questions of power, such as: who gets to decide how AI systems are designed and deployed? And who benefits from them? Secondly, AI feminism can guide us in rethinking how AI is designed, deployed, and represented.

In the past years we have witnessed an explosion in AI systems. During this AI summer, or AI hype – when a lot of interest and funding is invested in developing systems – there has also been a rise of critical voices looking into how these technologies impact our lives. Feminist scholars, artists and activists have taken a central role in calling out the AI industry for designing and deploying AI in ways that undermine social justice and harm those already marginalized in society. It is probably good to mention here that both artistic and scholarly AI feminism stem from intersectional feminisms that draw on Black feminist thought of the 1970s and 80s. One influential AI feminist who has addressed problems with facial recognition technologies through an intersectional feminist lens is Joy Buolamwini. For example, her artwork AI, Ain’t I a Woman is a ‘viral video poem’ showing how gender classification products repeatedly misclassify iconic Black women as male.

I have come to call such artworks that test and evaluate AI ‘artistic audits.’ Buolamwini, who is also a computer scientist, published the seminal study Gender Shades together with AI scholar Timnit Gebru. This study delivered evidence that popular facial recognition technologies, which performed with high accuracy when detecting male faces with light skin tones, were considerably less accurate in detecting women with darker skin tones. Coded Gaze, another viral poem by Buolamwini, addresses her own experience of facial recognition technology failing to detect her dark-skinned face until she puts on a white mask. In this artwork Buolamwini points to the lack of diversity in training datasets as one reason why facial recognition technologies work well in detecting some faces yet fail to recognize others. This type of representation bias can be fixed by adding diverse examples of images to datasets. However, not all biases in AI can be solved with technical fixes: more accurate AI cannot solve histories of structural discrimination; rather, data-driven AI amplifies histories of oppression. A great way to get familiar with the problematics around AI is by watching Shalini Kantayya’s documentary Coded Bias. It sets out from Buolamwini’s story and features interviews with prominent AI ethics scholars.

Trevor Paglen is another prominent artist addressing biases in machine vision. Together with AI researcher Kate Crawford, Paglen curated the Training Humans exhibition, which showcased photographs from several datasets used to develop computer vision. The exhibition problematizes the origins, taxonomies, and logics behind AI that classifies humans based on appearance. It was accompanied by an essay called Excavating AI, communicating to a wider audience the importance of studying the datasets our AI systems are trained on. Other impactful engagements with training datasets include Adam Harvey’s exposing.ai and Holly Herndon & Mat Dryhurst’s Have I Been Trained tool, both artist-led projects addressing the issues of privacy and copyright involved in assembling datasets.

Our artwork Suspicious Behavior contributes to this body of artistic research, which can be positioned as critical dataset studies. Taking the form of a speculative annotation tutorial, the artwork approaches datasets from a slightly different angle by investigating practices of curating the video datasets used to train AI-powered surveillance cameras to detect suspicious behaviour. At the same time, Suspicious Behavior addresses power inequalities in how annotation work is outsourced to so-called click workers. This type of labour, which involves labelling and segmenting images, is typically hidden on the backstage of AI and is considered a form of AI colonialism.

Because data is the foundation of AI, it matters how data is collected, classified, and curated into datasets. As AI artist Hannah Davis has expressed it: A Dataset is a Worldview. This means that data is never raw; it is always in some way biased. A feminist framework can help us account for this. For example, the Data Feminism movement works with questions around data more broadly. Scholars and artists have recognized the importance of rethinking the whole life cycle of AI through a feminist lens, asking whether feminist AI is possible. One of them is artist Caroline Sinders with her project Feminist Data Set; another is Full Stack Feminism, a collaboration between digital humanities and art.

In addition to the inequalities caused by AI systems, representations of AI, and of those who work with it, also lack diversity. Science fiction images of white humanoid robots are misleading representations of AI, obfuscating our understanding of the impact AI already has on society. AI feminist art is also needed to create more realistic representations of what AI is and to diversify imaginaries of those who work with it. Many of the 190 digital artworks in the Machine Vision Database that I have been analyzing for my PhD research do exactly this. Another project engaging artists to diversify representations of AI is the Better Images of AI project. And if I have managed to spark your interest in AI feminism in these couple of paragraphs, I recommend the Good Robot podcast to learn why feminism is needed in AI and in technology more broadly.

 

KAIRUS

How and when did you two meet and start working together?

Andreas: Back in 2008 I was studying at the Interface Cultures department (University of Art and Design Linz) and used the chance to do an Erasmus exchange at the TaiK Media Lab (now part of Aalto University). It took another two years before we founded the Kairus collective, when we lived in Rotterdam (NL). From the beginning we worked internationally through residencies: Subnet in Austria, Changdong Residency in Seoul (South Korea) and Redgate Gallery in Beijing (China). The first artwork we did together is called ‘Re: Dakar Arts Festival’ and was shown in 2012 at SIGGRAPH Asia in Singapore.

https://kairus.org/wp-content/uploads/2012/12/Screen-Shot-2014-02-07-at-5.32.49-PM.png

Linda: I have a background in graphic design and had worked in media production when I started to study at Media Lab and met Andreas. Back then I was convinced that I wanted to become a 3D guru. However, starting to collaborate with Andreas took me on a more critical path of understanding what media is and how we use or abuse it. When I graduated with my MA from Media Lab I was certain that PhD research was not for me. But the thought matured while collaborating with Andreas, when he started his art-led PhD in Linz some years later.

 

How does art and research combine in your work?

Andreas: From the beginning our collaborative art practice has been research-led; who takes the lead depends on the project. Between 2012 and 2016 much of our art was based on my PhD research on Internet crime and on the vigilante groups who fight against scams and scammers. In 2014 this research brought us to Ghana, specifically the Agbogbloshie electronic waste dump in the capital Accra, where we bought 22 hard drives. Back in Linz (Austria), where we lived at the time, we started to work with them in an artistic research project called Behind the Smart World, in the framework of a Research Lab at servus.at. Now that AI tools like text and image generators and deepfakes are spamming the Internet, our earlier research is very relevant again.

https://kairus.org/wp-content/uploads/2014/09/Screen-Shot-2014-09-08-at-1.58.45-PM.png

Linda: After 2016 we lived for a couple of years in South Korea, where we worked with data security and data ethics in the context of smart cities. In the framework of a research project called Internet of Other People's Things, we were already working with machine vision technologies and surveillance, with works like Insecure by Design, Panopticities, Sharing Locations and Future Past Still in the Making.

https://kairus.org/wp-content/uploads/2018/04/web_cameras1.jpg
https://kairus.org/wp-content/uploads/2018/02/KairUs_panopticity01.png

Machine vision and AI became the focal point of our artistic research when I started my PhD at the University of Bergen in the Machine Vision in Everyday Life project. Since then my PhD research has influenced the topics we choose. So, Ideal Behavior emerges from my academic research, but then Andreas takes his own approach to it, for example by looking at a lot of online conversations on Reddit and influencer tips on how to beat the AI bots.

 

How do your talents merge in Kairus, and which roles does each of you take?

Linda: When we are working on a project we typically have a research phase, and then it takes a while to settle on the format. The conceptualisation of an artwork very much emerges from conversations around the research: what we read in research papers, articles, videos, forums, etc.

Andreas: We like to take these conversations to wider audiences in the form of workshops, artist talks or lectures, which in turn influence us as we discuss how the public perceives the topics and themes we are working on.

Linda: Or we consult with experts in the field, both researchers and those who engage with these technologies in different ways. When and how this happens usually depends on funding and on collaborations with different institutions.

Andreas: In the production phase we usually make use of our technical skills. Linda likes photo and video editing, I prefer interactive storytelling, and we both do graphic design. We often take the chance to learn new applications or workflows. For example, for our previous artwork ‘Suspicious Behavior’ I took the time to learn an open source tool for interactive storytelling called Twine. We try to use open source alternatives whenever possible.

Linda: However, we are also critically curious about emerging technologies, asking, for example, how generative AI tools are impacting our everyday lives. So for ‘Ideal Behavior’ we want to try out different generative AI tools, which are to a large extent proprietary.