This work was mainly produced by GADARA. (If you want to know more about GADARA, please see this Japanese article.)

In this post I would like to write about the experimental performance I presented at MUTEK.JP in 2019, which combines natural objects with AI-based music generation tools: why I decided to create such a work, the history of its creation, and the possibility of new musical expression using AI.

Exploring the design of future tools using natural objects

We at GADARA have long been interested in the emotional value of natural objects, and have experimented with adjusting the brightness of lights and the volume of speakers by moving stones.

Hypothesis for exploring new relationships between humans and tools
By designing products that fuse the characteristics of natural objects with their functions, we can discover new behaviors and emotions in people.

I found it interesting that the form and texture of a natural object, together with the individuality and sensitivity of the person who touches it, can change how it is received: “For some reason, I like this stone.”

So, focusing on the theme of music, we gave natural objects the ability to be played like instruments, searching for techniques and attractive sounds that cannot be found in existing products. We began to explore the interaction between “individual differences in natural objects” and “the personality of the operator”.

Here is the first prototype I made.

Gyroscopic sensors for sensing movement are embedded in stones and pieces of wood, and the pitch and playback speed of a warbler’s song change in accordance with the movement. It’s a pretty rough prototype, but that’s where we always start.
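As a rough illustration of that mapping, here is a minimal Python sketch; the sensor range and the scaling values are illustrative assumptions, not the numbers used in the actual prototype.

```python
# Minimal sketch: map a stone's gyroscope reading to pitch and playback speed.
# The range and scaling below are illustrative assumptions, not the prototype's values.

def map_gyro_to_sound(angular_velocity_dps, max_dps=500.0):
    """Map angular velocity (degrees/second) to a pitch offset and a playback rate."""
    # Clamp and normalise the reading to the range -1.0 .. 1.0.
    norm = max(-1.0, min(1.0, angular_velocity_dps / max_dps))
    pitch_semitones = norm * 12.0      # up to one octave up or down
    playback_rate = 1.0 + norm * 0.5   # 0.5x .. 1.5x playback speed
    return pitch_semitones, playback_rate

if __name__ == "__main__":
    for reading in (-400.0, 0.0, 250.0):   # fake gyro samples for the demo
        pitch, rate = map_gyro_to_sound(reading)
        print(f"gyro={reading:+7.1f} dps -> pitch {pitch:+5.1f} st, rate {rate:.2f}x")
```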

In order to work toward a musical expression, we also made a prototype that connects to DAW (digital audio workstation) software.

It’s a simple mechanism in which spinning the stone applies an effect to the sound, but it’s strangely fun to touch, simply because you can operate it from something as plain as a stone. For some reason, this stone is also easy to turn.
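The simplest way to connect such an object to a DAW is to send MIDI control-change messages that the DAW maps to an effect parameter. Below is a minimal sketch using the mido library; the virtual port name and CC number are assumptions, not necessarily what we used.

```python
# Minimal sketch: send a stone's rotation to a DAW as a MIDI control change.
# Requires `pip install mido python-rtmidi`; port name and CC number are assumptions.
import time
import mido

PORT_NAME = "Stone Controller"   # hypothetical virtual MIDI port the DAW listens to
CC_NUMBER = 74                   # commonly mapped to filter cutoff, but any mapping works

def rotation_to_cc(angle_degrees):
    """Convert a 0-360 degree rotation of the stone into a 0-127 MIDI CC value."""
    return int((angle_degrees % 360.0) / 360.0 * 127)

if __name__ == "__main__":
    with mido.open_output(PORT_NAME, virtual=True) as port:
        # Fake a slow spin of the stone and stream it to the DAW.
        for angle in range(0, 360, 10):
            msg = mido.Message("control_change", control=CC_NUMBER,
                               value=rotation_to_cc(angle))
            port.send(msg)
            time.sleep(0.05)
```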

This is a bit of a digression, but here is a sketch of the final experience I was imagining.

The music as a whole is coordinated by the person holding the master stone (M), and is built up from players (L, R) who change and generate tones at will. I won’t go into depth here, or I’ll lose track of what I’m saying myself, but thanks to the constraints of natural objects as tools, even someone with no knowledge of music can perform easily. In other words, I thought it might become possible for someone who had only ever been an audience member to participate in the act of playing music.
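If I were to write that structure down in code, it would look something like the very rough sketch below; the role names, the scale, and the parameters are just my shorthand for the concept, not an actual implementation.

```python
# Very rough sketch of the performance structure: one master stone sets the shared
# musical frame, and player stones (L, R) vary their own tone inside it.
# Names and values are shorthand for the concept, not an actual implementation.
from dataclasses import dataclass

@dataclass
class MasterStone:
    tempo_bpm: float = 100.0
    scale: tuple = ("C", "D", "E", "G", "A")   # constrain players to a pentatonic scale

@dataclass
class PlayerStone:
    name: str
    note_index: int = 0

    def tilt(self, steps: int, master: MasterStone) -> str:
        """Tilting the stone steps through the master's scale; it can never leave it."""
        self.note_index = (self.note_index + steps) % len(master.scale)
        return master.scale[self.note_index]

if __name__ == "__main__":
    master = MasterStone()
    left, right = PlayerStone("L"), PlayerStone("R")
    print(left.tilt(2, master), right.tilt(-1, master), f"@ {master.tempo_bpm} bpm")
```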

Discovering, while prototyping, how much musical expression can be achieved with casual tools such as natural objects is exactly the fun of interaction design.


I’ve been fantasizing about picking up a stone, embedding a sensor in it, using it as a musical instrument, and returning it to nature when I’m done with it. This series of prototypes was made at a place called 100 BANCH.

The possibility of new musical expression with AI Music

From here, we explore the possibility of using AI to express music in new ways. I applied to the AI Music Lab held at MUTEK.JP last year and began working on the theme of AI Music.

“MUTEK.JP is an internationally renowned art and cultural organization that aims to develop digital creativity, electronic music, and audio-visual art, and to promote cultural and artistic activities.”
“The idea is to keep the mutations in music that have evolved through technology at the forefront, while continuing to search for a world in dialogue with music and technology.”

MUTEK.JP was a new experience for me: a different world where many leading artists from overseas gathered.

First, I participated in a workshop for about two weeks, where I learned about AI and was introduced to the tools.


A scene from the workshop

From there, each participant applies AI to their own work and explores possibilities for musical expression. We decided to use a tool called SampleVAE, developed by Qosmo Inc.

For SampleVAE, I think this article by Max, the developer, will be helpful. For example, by training the AI on about 10,000 drum samples, it can generate sounds similar to the audio that is fed into it, and it can classify drum sounds, distinguishing, say, kicks from snares.

What would happen if we fed it the sounds of striking rocks and other natural objects, which are not in its training data, and had the AI generate the drum and kick sounds it thinks correspond to them?
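Concretely, that experiment can be driven from SampleVAE roughly as sketched below. The interface follows the usage in the SampleVAE README as I remember it (SoundSampleTool, generate, find_similar), and the model name and file paths are placeholders, so please check the repository for the exact details.

```python
# Sketch of the experiment: feed recordings of struck rocks and shells to SampleVAE
# and let it generate the drum sounds it associates with them.
# Interface and model name follow the SampleVAE README as I recall it; treat the
# names, paths, and arguments as assumptions to verify against the repository.
from tool_class import SoundSampleTool   # from github.com/maxfrenzel/SampleVAE

# A model pre-trained on drum samples (placeholder name) plus a local sample library.
tool = SoundSampleTool('model_drums', library_dir='./drum_library')

# Encode a rock-hit recording and decode it through the drum-trained model:
# the result is the "drum sound" the model hears inside the rock hit.
tool.generate(out_file='rock_as_drum.wav', audio_files=['hits/rock_hit_01.wav'])

# Blend two natural-object hits in latent space before decoding.
tool.generate(out_file='rock_shell_blend.wav',
              audio_files=['hits/rock_hit_01.wav', 'hits/shell_hit_02.wav'],
              weights=[2, 1])

# Or search the drum library for the samples closest to the rock hit.
tool.find_similar('hits/rock_hit_01.wav', num_similar=5)
```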

The challenge was to combine the unique acoustic properties of natural objects with AI in a performance.

 

There were plenty of smiles, but this time the sounds of striking rocks and shells were given to the AI, and what came back was a noisy, deep sound. (After the performance, one of the people who saw it said that the banana sound was the best.)

In this way, by taking on the challenge of AI sound generation and combining the modeled “sound of a musical instrument” with the “sound of a natural object”, something different emerged, and I feel I have found a new possibility for expression.

Also, I think it was the mutation of AI × MUTEK.JP × our interaction design unit that led to this work. I would like to use this as an opportunity to collaborate with more people and raise the level of the AI × Music × Interaction field.

This sound generation tool is now available on GitHub. Try it out if you like.

