
Exploring the workings of artificial intelligence through artistic creation is the ambition of the latest project launched by Sónar as part of the AI and Music S+T+ARTS Festival. Researchers and artists have worked together on these questions to create new performances. In this piece, Cécile Moroux talks with Anna Diaz and Javier Ruiz Hidalgo about “Engendered Otherness – A Symbiotic AI Dance Ensemble”.


Author: Cécile Moroux

Photo Credit: Alba Rupérez

There are more and more articles and analyses about artificial intelligence, algorithms, machine learning, blockchain, Web3 and NFTs. These concepts dominate the news and stir up polemics: sometimes they are presented as the next steps towards more democracy and knowledge, sometimes as the promise of ever greater alienation through new technologies, screens and mass surveillance.

Yet for anyone living outside these circles, it can be difficult to grasp what these concepts mean in concrete terms. To approach them from a different angle, we propose to explore the use of artificial intelligence in the creative processes of music, dance and the visual arts.

We have long been following several ambassadors of this kind of exploration, such as Refik Anadol, Holly Herndon or Mat Dryhurst. This time around, we are interested in one of the latest projects by the visual design studio Hamill Industries, a We are Europe face since 2019, whose multi-dimensional and collaborative nature piqued our curiosity.

In October 2021, the Sónar Festival team joined forces with the Universitat Politècnica de Catalunya (UPC), a scientific and technical university based in Barcelona, to organize the AI and Music S+T+ARTS Festival. The programme included a wide range of performances, conferences and exhibitions exploring the uses of artificial intelligence in the arts, particularly in music.

The programme also featured three live performances, born out of several weeks of collaboration between small groups of artists, engineers and researchers. Among them, a familiar name: Hamill Industries.

Hamill Industries is a studio composed of Pablo Barquín and Anna Diaz. They are filmmakers, researchers and multimedia artists. The core of their work consists of combining computational, robotic and video techniques to visually explore concepts from nature, the cosmos or the laws of physics. They have produced much of Floating Points‘ visual universe, such as the video for “Last Bloom” (on Crush, released on Ninja Tune), whose making-of reveals the talent and ingenuity of the studio.

For this collaborative arts-science project, the Barcelona studio teamed up with Puerto Rican dancer and choreographer Kiani del Valle and a small team of engineers from the UPC led by Javier Ruiz Hidalgo, also including Martí de Castro and Stefano Rosso, both PhD students. After several weeks of work, they were able to present “Engendered Otherness – A Symbiotic AI Dance Ensemble”.

The show opens with a poem by Belén Palos read by a metallic voice.

“ WE the cluster, the system, 

the metabolism 

PULSES and BEATS and SYNAPSES

We the part, the other,

container and contained

the LIMITLESS – the BOUND – the UNRESTRAINED

With Blood and carcass,

with shell and bark

WE SPREAD, WE BOND, WE BREAK, WE FLY

[….] ” 

The first images projected on screen evoke a terrestrial landscape of mountains and water, a mixture of peaks, green and blue, like a timelapse of fjords in the middle of summer. The image transforms to give way to microorganisms that resemble cells or bacilli. The shapes slowly change in layers of color, until they become a brilliant cluster, a luminous constellation that seems to react to the impulse of a heartbeat. Then the screen goes black again, and Kiani del Valle takes over. Alone on stage at first, she then invites the AI to take its place on the screen and accompany her movements.

What makes this project’s approach distinctive is its focus on choreography and movement. Here, a symbiosis of living things becomes possible through the intervention of an artificial intelligence capable of merging the movements of different organisms into a single visual result.

“Destroy the algorithms, mess up the organisms.” (Anna Diaz)

To better understand what was presented, we asked Anna (Hamill Industries) and Javier Ruiz Hidalgo (UPC) about the production and co-creation process behind the performance. It is first and foremost a research and collaboration project between different fields and bodies of knowledge, and it reveals the questions and development possibilities surrounding AI.

What was the starting point of the thought process – of the project?

Anna – We wanted to study the behaviour of artificial intelligence and algorithms and especially to produce new forms of expression that we are not able to create as humans.

JRH – I work in the Signal Theory and Communications Department at UPC, in the image processing research group, and we are the first in our group to explore AI in the context of artistic creation. The tools we are developing allow us to analyse images from reality to generate images that are completely fictional. We train a system (AI) to learn the characteristics of an image. Based on these characteristics, it creates the image by itself.
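(Editor’s note: to give a rough idea of what this training-then-generation principle can look like in code, here is a minimal, hypothetical sketch built around a toy autoencoder. All sizes, names and the stand-in data are illustrative assumptions, not the actual tool developed at UPC.)

```python
# A minimal sketch of the principle described above: a network is shown
# many images, learns to capture their characteristics, and can then
# produce new images on its own. A tiny autoencoder stands in for the
# real system; data and sizes are illustrative, not the UPC tool.
import torch
import torch.nn as nn

IMG_PIXELS = 32 * 32
LATENT_DIM = 64

encoder = nn.Linear(IMG_PIXELS, LATENT_DIM)
decoder = nn.Sequential(nn.Linear(LATENT_DIM, IMG_PIXELS), nn.Tanh())
optimizer = torch.optim.Adam(
    [*encoder.parameters(), *decoder.parameters()], lr=1e-3
)
loss_fn = nn.MSELoss()

# Stand-in training set; the real systems were trained on thousands
# of photographs per organism.
training_images = torch.randn(256, IMG_PIXELS).tanh()

# Training: learning to reproduce the images forces the network to
# extract their visual characteristics.
for epoch in range(5):
    reconstruction = decoder(encoder(training_images))
    loss = loss_fn(reconstruction, training_images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Generation: a new latent point decodes to an image "by itself",
# with no input photograph involved.
new_image = decoder(torch.randn(LATENT_DIM)).reshape(32, 32)
```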

We were approached by Sónar because they wanted to start this collaboration to see how AI could be integrated into artistic creation. 

How long did the process of collaboration and creation last, in what form and in what stages?

JRH – The discussion started in May 2021 between Hamill, Kiani and us. By July we had a pretty clear idea of what we wanted to explore and of the way we wanted to use AI to create the show. On our side at UPC, we dedicated August and September to developing the tool that Hamill could then “play” with throughout October to generate imagery from Kiani‘s movements, and create the images and videos used in the final performance.

What are the roles of the people involved in this project? 

Anna – Our role with Hamill Industries was the selection of the organisms, the timelapse, the motion capture, and finally the selection of the images to give to the artificial intelligence. Basically we created and selected all the data to train the algorithm.

Kiani‘s role was to move, to produce data, raw material. She was one of the model organisms for motion capture. JRH and his team provided us with the tool that processed the collected data, transforming it into visual elements but also making it possible, for example, to predict some of Kiani‘s movements.

There was real collaboration at every level, particularly in building the tool: we defined the needs as curators, and the engineers defined what was possible.

JRH – We selected several living models: flowers, fungi, insects, bacteria, jellyfish, and Kiani. For each model, we created a different system. Our role was to train these systems. Each system needs thousands of images for it to train, so it is able to extract visual characteristics of each living model and generate new organisms accordingly.

© Alba Rupérez

At what point does the creative role of the human come in? What about the AI’s creative input?

Anna – The AI here has three main roles: motion capture (collecting data, we’re talking hundreds of images of Kiani for example); creating visual models from this data; and prediction. 

From a movement and a large volume of data, the artificial intelligence is able to interpret that data and give it meaning. From this interpretation, it renders a unique and novel visual model, and above all this rendering, or interpretation, is largely autonomous. Human work is concentrated in the first part of the process: the elements the machine will learn, the images and organisms chosen as the basis for the machine’s data and learning, and the way it is made to learn.

JRH – The system is trained to look at the images. For example, let’s take the case of the jellyfish: you give a system hundreds of images of jellyfish and somehow the system is then trained to recognise the characteristics of those jellyfish images. Through the training, the system creates a mapping between the input and an image of a jellyfish.

You can think of the input as a point on a map. A specific point generates a specific image of a jellyfish; selecting another point on the map creates another jellyfish. The creative input from the AI lies in how it learns to move between one point on the map and another. So if one jellyfish is associated with Barcelona (a point on the map) and another with Paris, what would the jellyfish of Toulouse be?
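(Editor’s note: to make the “map” idea concrete, here is a minimal sketch of latent-space interpolation, the “jellyfish of Toulouse” idea. The generator is an untrained toy stand-in with invented sizes; a real one would be trained on thousands of jellyfish images.)

```python
# A minimal sketch of the "map" idea: each latent point decodes to one
# jellyfish, and walking the line between two points yields jellyfish
# the network was never shown. The generator here is an untrained toy
# stand-in; all sizes are illustrative.
import torch
import torch.nn as nn

LATENT_DIM = 64
IMG_PIXELS = 32 * 32   # illustrative image size

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, IMG_PIXELS),
    nn.Tanh(),
)

z_barcelona = torch.randn(LATENT_DIM)  # latent point for one jellyfish
z_paris = torch.randn(LATENT_DIM)      # latent point for another

# Interpolate: the intermediate points are the "jellyfish of Toulouse",
# images that exist on the map but were never in the training data.
for t in torch.linspace(0.0, 1.0, steps=5):
    z_between = (1 - t) * z_barcelona + t * z_paris
    image = generator(z_between).reshape(32, 32)
    print(f"t={float(t):.2f} -> generated image {tuple(image.shape)}")
```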

How does the result of the live show work? When you are standing in front of the show, what do you see?

JRH – The creative process, for us as scientists, is that depending on the information you give the system, images are created and move in a certain way. Each of Kiani‘s movements is associated with a value. So when Kiani moves, she sends information (a value) to the system, which produces an image from that value. Kiani‘s choreography thus conditions the output: the images move according to Kiani, though not as an exact copy of her movement.

The system creates images similar to Kiani‘s, but it makes its own interpretation. It might not understand the human body as we do. We didn’t teach it as we would a human, saying for instance that this is a hand, a head or feet. We only gave it images, so that the AI system could learn a representation, a mapping, of the body.
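(Editor’s note: hypothetically, the movement-to-image loop described here could look like the following sketch, in which each captured pose is reduced to a vector, mapped to a point in latent space and decoded into an image. The pose format, network sizes and function names are all assumptions for illustration, not the project’s actual pipeline.)

```python
# A sketch of how a dancer's movement could drive a generator: each
# motion-capture frame becomes a vector, is mapped to a latent point,
# and is decoded into an image. Names, sizes and the pose format are
# illustrative assumptions, not the project's actual code.
import torch
import torch.nn as nn

POSE_DIM = 17 * 2    # hypothetical: 17 body keypoints, (x, y) each
LATENT_DIM = 64
IMG_PIXELS = 32 * 32

# Stand-in networks: an untrained pose encoder feeding a generator.
pose_to_latent = nn.Linear(POSE_DIM, LATENT_DIM)
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, IMG_PIXELS),
    nn.Tanh(),
)

def frame_to_image(pose: torch.Tensor) -> torch.Tensor:
    """Map one motion-capture frame to one generated image."""
    z = pose_to_latent(pose)  # the "value" the movement sends
    return generator(z).reshape(32, 32)

# Simulate a short sequence of captured poses: as the pose changes
# frame by frame, the generated image changes with it.
poses = torch.randn(10, POSE_DIM)
images = [frame_to_image(p) for p in poses]
print(len(images), images[0].shape)  # 10 frames, each a 32x32 image
```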

What were the assumptions that turned out to be wrong and the discoveries that came out of nowhere?

Anna – What is incredible about this project is that everything is a discovery and everything is unexpected. Everything that is produced by the AI is completely new and it is impossible for a human to reproduce it in the same way.

From the point of view of your discipline, your profession, what have you learned through this collaboration?

Anna – The change in the creative workflow. This project required us to move from, say, a monolithic human workflow to a non-linear, tree-like, exponential workflow, thanks to all the possibilities offered by AI. 

JRH – At that point, it was about applying an existing tool to an art project; it was not pure research. For me it was very interesting, because as a scientist I have a ‘square’ mind, a pragmatic approach. In our job, we always have to measure the relevance of what we are doing, to evaluate our results. But here we had no such measurement to make, and for me that was completely new. It did mean something, but the ‘quality measurement’ was not important.

At the beginning it was very difficult for me to accept an image of Kiani with three legs. It was impossible, like it was wrong! However, Hamill and Anna made me realize that it didn’t matter. On the contrary, the objective here was to observe how the network learns, and how it interprets Kiani‘s body. And for me that was a really new approach.

© Alba Rupérez

Is this project intended to be shared and taught?

Anna – Yes, the tool is open source. The idea is also to exchange with the students of the UPC. There is clearly an objective of sharing and transmission in this project. The idea is to allow a better understanding of these technologies through this type of creative application.

JRH – Yes, of course. We are a public university, we have a primary mission of sharing knowledge. I was very happy to see my students motivated by this project. It has even become an argument of the type: “If you understand everything during the course, you will be able to do something like this”. It’s extremely rewarding. It is not always easy to have a concrete and rapid application of what you develop in research. In this sense, this type of project is a real playground. 

What are the next steps in the project?

Anna – This is really the very beginning. It will take more time, and also more money, to go further and to understand how far the creative process of artificial intelligence can extend. We would also like to develop the prediction part of the AI.

The first thing will be to work on new models using our own images (editor’s note: in the first version of the project the models are built from images found on the internet) in order to add our own imagination and our own vision. So for us the next priorities are to work on the models with our own photographs and techniques and then merge these models of organisms and see what the AI is able to predict and achieve as images.

JRH – The next step will probably be to go further, integrating research to try to find new architectures, new systems, that better understand movement and dance.

When you train a generative system, a network capable of creating images, as we did, you give it images and the network learns by itself to reproduce them. However, it is possible to condition this generation of images. Some researchers are working on systems that not only generate an image of a face, for instance, but can also be asked to generate that face with glasses, or with long hair.

We can apply this to our future show. For example, for the jellyfish, you can condition where the tentacles are, and therefore how they move.
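(Editor’s note: as a rough illustration of such conditioning, the sketch below concatenates a condition vector with the latent noise before generation, the same mechanism that lets researchers ask for “a face with glasses”. The condition encoding and all sizes are invented for the example, not the UPC team’s architecture.)

```python
# A minimal sketch of conditional generation: the latent noise is
# concatenated with a condition vector before decoding, so the same
# identity can be generated under different conditions (glasses on a
# face, or where a jellyfish's tentacles sit). All names and sizes
# are invented for illustration.
import torch
import torch.nn as nn

LATENT_DIM = 64
COND_DIM = 8        # hypothetical encoding of tentacle position
IMG_PIXELS = 32 * 32

generator = nn.Sequential(
    nn.Linear(LATENT_DIM + COND_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, IMG_PIXELS),
    nn.Tanh(),
)

z = torch.randn(LATENT_DIM)   # one random jellyfish "identity"

# The same z with two different conditions: the identity stays fixed
# while the conditioned attribute (tentacle placement) changes.
for label in range(2):
    condition = torch.zeros(COND_DIM)
    condition[label] = 1.0    # one-hot: e.g. "tentacles left/right"
    image = generator(torch.cat([z, condition])).reshape(32, 32)
    print(f"condition {label}: image shape {tuple(image.shape)}")
```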

Anna – Some updates and developments were presented at Sónar Lisboa in the form of a talk at the beginning of April 2022, and the project in its new form will be presented at Sónar Barcelona in June, but this time with Kiani and the new models created. 

Machine listening has become widespread and has been developed commercially (Siri, voice commands, etc.). It also allows for very effective surveillance. If machines learn how humans and organisms move, have you ever thought about how far the application of such technology could go?

JRH – The moral issue is obviously something we have to take into account. Questions are already being raised by the development of deepfakes. This technique substitutes one face for another quite realistically; you can, for example, make Obama say whatever you want. And of course it’s a violation of privacy.

It’s something very similar to the technologies we use. As a scientist, this is my job. You create a technology and a tool and people start to use it in ways you haven’t even thought of. How do you stop that? It’s a difficult question.

But one thing we are working on as a scientific community is the issue of data bias: making sure we don’t train systems on images that are not inclusive, that are discriminatory or that reproduce prejudices, for example. I have seen research that looks at this issue and tries to develop systems to avoid these biases. However, we have found no way to prevent the misapplication of our research.

Anna – Of course we are thinking about it. In fact, we work with these technologies because big companies have invested huge amounts of money to develop these algorithms. There is always a moral problem with these technologies: why do these big companies develop these technologies? What could they do with algorithms that can identify patterns? These questions are intrinsic to capitalism or even marketing. 

This is why it is interesting to use these algorithms outside the context for which they were created. It is a way of diverting them. If we can use these algorithms in a creative way, then we bypass the rule. These experiments made me understand how these algorithms work. 

Industrialisation has created tools that can be used to produce harm, to control, to create fear, to make us dependent on certain products. But when you take an interest in machine learning and learn how to use it, you understand, in a way, how part of the world works; you become a bit more aware.

© Alba Rupérez

Can this type of questioning sometimes lead to self-censorship in your research or explorations?

Anna – Yes. To be honest, before I got into machine learning, I was very skeptical about its use. Not only because of what we just talked about, the control and the surveillance, but also because I doubted how interesting it could be. I think this fear came from a lack of knowledge about how it works, because in reality it is extremely interesting for my artistic practice. In fact, this technology allows you to understand your visual practice, your habits, your mistakes, your tastes too.

Now that we are training this algorithm with our own images, we have become aware of our patterns (of color, shape, etc.). And technology becomes a tool to explore ourselves and see where we are in our practice.

But I think there are so many questions around areas like machine learning, mathematics and algorithms that censorship and fear (at least in my field) have more to do with a lack of knowledge than with a fear of “surveillance”, for example. I’m a pretty skeptical person, and I think our phone is far more dangerous in terms of surveillance than some of the algorithms we can use to create.

JRH – Personally, I am a very practical person and I try not to think too much about the possible scenarios or ramifications of my research. I just try to solve the problems at hand and hope that the tools we develop will be used as initially intended.

Can you comment on this quote? “The more we know about AI, the more we know about humans.”

Anna – I agree 100% with this idea. In fact, the more we know about AI, the more we know about humans. Why? Simply because machine learning and algorithms have been created by us. It is not some kind of alien intelligence that has appeared out of nowhere and invaded our world. It is humans who create the rules by which the machine learns to act and predict. And these rules we create are based on our own functioning, our prejudices, the interests of our society, so it’s a mirror.

For example, there is a lot of exploration in linguistics, where algorithms are used to understand whether a language has a gender bias or not, whether it has a patriarchal structure or not: and it does! How can they learn this? Because the machines were made to learn what we know, to understand the concept of gender. It says so much about us.

Another example: some algorithms have been trained with texts or images scattered on the Internet. They have become racist and very populist. And that’s because they learn from us. It’s not a moral entity, it’s a mirror of society and it operates according to the rules we dictate to it. So yes, the more we know about AI, the more we know about humans.

JRH – I would say that the more we learn about AI, the more we learn about everything. We can use machine learning processes to better understand how things move. By studying the movement of a jellyfish via AI, we can extrapolate the information beyond physics. This allows us to interpret information in new ways.


About the author

Cécile Moroux has been a cultural worker since 2016. She has worked for several independent organisations (a music PR agency, a crowdfunding and sponsorship platform, a booking agency). She has coordinated We are Europe since 2019 and is involved in different European projects carried out by the Lyon-based association Arty Farty.

Lexicon

Artificial intelligence: techniques attempting to enable computers to act or to reason in ways that we would call intelligent.

Algorithm: a finite sequence of well-defined instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing.

By making use of artificial intelligence, algorithms can perform automated deductions (referred to as automated reasoning) and use mathematical and logical tests to divert the code through various routes (referred to as automated decision-making).

Using human characteristics as descriptors of machines in metaphorical ways was already practiced by Alan Turing with terms such as “memory”, “search” and “stimulus”.

Machine Learning is an artificial intelligence technology that enables computers to learn without being explicitly programmed to do so. However, in order to learn and develop, computers need data to analyse and train on. It is a modern science of discovering patterns and making predictions from data based on statistics, data mining, pattern recognition and predictive analysis.

Computer Vision refers to an artificial intelligence technique for analysing images captured by equipment such as a camera.

It is an AI-based tool capable of recognising an image, understanding it and processing the resulting information. For many, computer vision is the AI equivalent of the human eye and of our brain’s ability to process and analyse perceived images. Reproducing human vision by computer is one of the field’s major goals.

Timelapse: a video animation made up of a series of photographs taken at different times, showing the evolution of the photographed subject over a long period within a short time. It can be used, for example, to show the opening of a flower, or the movement of the sun or the stars across the sky.

Motion Capture is an animation technique that records the movements of a moving object or human. Using sensors, these movements are captured by digital cameras and visually reproduced in real time on a computer.

Machine Listening is a discipline that involves teaching computers how to listen to sounds, whether they be music, speech or everyday environmental sounds.

Deepfake is a multimedia synthesis technique based on artificial intelligence. It can be used to overlay existing video or audio files onto other video files (e.g. changing a person’s face on a video) or audio files (e.g. reproducing a person’s voice to make them say made-up things).

This technique can be used to create malicious hoaxes. The term deepfake is a portmanteau of deep learning and fake.
