Yago de Quay: I Control Music Using Gestures And Brainwaves

This high-tech renegade is setting his sights on a new performance industry.

Yago de Quay, BREAKDOWN (clip) (2014). Courtesy Yago de Quay.

It's a tough world for artists with interdisciplinary visions. Though most creative industries are starting to adapt to (and adopt) new ways of working and thinking to keep pace with the bleeding edge, pathways for blending disparate fields remain few. One such artist is Yago de Quay: a technologist, musician, director, and performance artist all wrapped up in one.

“I didn’t just want to be a performer,” de Quay told me in a recent interview. “I thought that I wanted to be much more of a director. I saw that the convergence of advanced technology and music was not being explored at Berklee [College of Music].”

This led de Quay to forge his own path—and to expand his vision, he pursued a Master's degree in interactive music and sound design at the University of Porto in Portugal straight out of Berklee's gates. There, he began to incorporate cameras, computer vision, machine learning, fellow dancers, and performing artists, along with other supporting components, into his performances. De Quay says that pushing technological innovation in music production has been his modus operandi ever since.

Yago de Quay's Interactive Audio-Visual Show for Intel. Courtesy YouTube.

In time, companies like Toyota and Peugeot started to tap de Quay to design tailor-made music and events that showcased their products. In 2016, for example, de Quay wove gesture-tracking technology into his work for Intel, using the company's Curie-based wristbands and RealSense cameras to produce sound with movement and augment the experience with unique interactive audio-video sets.

“The projects started being picked up by companies because they were interested in being associated with performing arts at their events,” de Quay explained. “But also, they wanted to incorporate their technologies [and] visions of the future.”

Following these projects, de Quay realized that one of the challenges he wanted to resolve in his work involved quantifying human movement. “There were a bunch of technologies out there, but none of them were actually very easy to use,” de Quay said. “They were either for film studios, so you have to have these big rigs and markers all over your face, or there were cameras that suffer from interference the second you put your hand in front of the camera. It just doesn’t see anything. So there was never a good solution for quantifying these movements.”

From a device that collects data on human movement to the possibilities of brainwave-computer interfaces, see what de Quay had to say about his work and the emerging industry he operates in, in the interview below.

1. How has your past work brought you to what you've been up to now? What was the catalyst behind the idea of your new project?

Yago de Quay: My past work was at the intersection of the arts and technology. More precisely, my performances used new technologies to control music with gestures and brainwaves. For the past forty years, the tech-savvy artistic community has been hacking consumer electronics in order to repurpose them. Since the mouse and keyboard, there has been no new reliable and precise controller for creatives to manipulate digital content.

Sidekick addresses that gap. It’s a wearable device for controlling media using gestures: a tiny wearable that emits a signal, and any space fitted with these very small receivers—about half the size of your phone—can listen to it and say, “I got this at this time” and “I got this at that time,” and therefore it is at X amount of distance. It is that simple: pretty much just the time of flight of the signal sent by the wearable. So any space that has these receivers would immediately register the wearable coming into it, and would know exactly which wearable it is.
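
That description maps onto one-way time-of-flight ranging: each fixed receiver timestamps the incoming pulse, the delay multiplied by the speed of light gives a distance, and three or more distances pin down a position. The sketch below illustrates the idea in Python; the anchor layout, function names, and the assumption of perfectly synchronized clocks are ours for illustration, not Sidekick's actual design, and shipping UWB systems typically use two-way ranging or time-difference-of-arrival to sidestep clock synchronization.

```python
import numpy as np

C = 299_792_458.0  # radio pulses travel at roughly the speed of light (m/s)

def tof_distances(t_emit, t_recv):
    """One-way time of flight: distance = propagation delay x speed of light."""
    return (np.asarray(t_recv) - t_emit) * C

def trilaterate(anchors, dists):
    """Least-squares 2D position from three or more anchor distances.

    Subtracting the first range equation from the others linearizes them:
    2(xi - x1)x + 2(yi - y1)y = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three receivers around a 10 m x 8 m stage; the wearable sits at (4, 3).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
true_dists = np.linalg.norm(np.asarray(anchors) - np.array([4.0, 3.0]), axis=1)
t_recv = true_dists / C  # each anchor hears a pulse emitted at t = 0
print(trilaterate(anchors, tof_distances(0.0, t_recv)))  # ~[4. 3.]
```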

2. What could be the result of Sidekick if it were to be put in the hands of creatives?

The objective of this is to open up a market of apps that are based on human data. So the device itself would open up all kinds of movement information about the user that has never been available before. And so the interesting thing is what kind of apps can be created by people using this technology. Like, the cool thing about the iPhone is not that it can make calls—that’s nothing new. But the apps you can install on it are what’s new.

So it will be a platform. It’s a piece of hardware, but it will also be a platform for people to install all kinds of artificial intelligence and applications that anyone can download to immediately get insight into their movement. It can be for health, for sleep, for controlling software applications, for controlling a drone, for virtual reality, etc. Anything that moves, or that can move, will be able to be studied and monitored much more closely, and will open itself up to being controlled or explored. The impact of Sidekick is measured by the creative results that flourish from its use. Made by creatives for creatives, it’s a radical departure from the chair/desktop interaction model.

de Quay wearing an EEG headset that transforms brainwaves into vivid patterns on a holographic screen. Courtesy the artist.

3. What are the greatest obstacles/roadblocks you've encountered in trying to bring Sidekick to life?

We’ve just gotten started. We’re now assembling a suitable team. It’s challenging because I need to find somebody who’s very good with this leading-edge technology for indoor positioning, which is ultra-wideband. I started working with it in my academic studies, but I need somebody hardcore at it. So I’m trying to find the right person to push the boundaries of what is currently possible.

I have some experts in mind that I will be talking with next month. They’ve worked with some of the radio positioning technologies that are critical to this. And if that goes well, it will go a long way in terms of making a team that’s going to create the first prototype that is set to come out next summer.

Sidekick’s innovative edge rests on a very recent radio trilateration method. We are still working on increasing accuracy and reliability, mostly by experimenting with sensor fusion and dead-reckoning algorithms.
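
As a rough sketch of how those two techniques complement each other, the Python snippet below dead-reckons between radio fixes by integrating inertial data, then blends each trilaterated fix back in to cancel the drift. The class name, blend constant, and update interface are hypothetical, and a production tracker would more likely use a Kalman filter, but the division of labor is the same: inertial sensors for smooth short-term motion, radio for drift-free absolute position.

```python
import numpy as np

class FusedTracker:
    """Complementary filter: IMU dead reckoning between UWB fixes."""

    def __init__(self, pos0, blend=0.15):
        self.pos = np.asarray(pos0, dtype=float)
        self.vel = np.zeros_like(self.pos)
        self.blend = blend  # 0 = trust the IMU only, 1 = trust UWB only

    def predict(self, accel, dt):
        """Dead reckoning: integrate acceleration -> velocity -> position."""
        self.vel += np.asarray(accel, dtype=float) * dt
        self.pos += self.vel * dt

    def correct(self, uwb_pos):
        """Nudge the drifting estimate toward a (noisy) trilaterated fix."""
        self.pos += self.blend * (np.asarray(uwb_pos, dtype=float) - self.pos)
        return self.pos

tracker = FusedTracker(pos0=(4.0, 3.0))
tracker.predict(accel=(0.2, 0.0), dt=0.01)   # 100 Hz IMU sample
print(tracker.correct(uwb_pos=(4.01, 3.0)))  # 10 Hz UWB fix pulls it back
```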

4. Are you switching gears to address needs within the field of multimedia entertainment, or will you continue your entertainment work?

I will always make time for producing multimedia performances, and we can implement a lot of the alpha and beta prototypes at events, because I still do a lot of entertainment with technology there. And as I’ve said, there’s always been a need for this particular motion-tracking device that we just don’t have. So I can start implementing those prototypes at these events as a proof of concept.

I also wrote a chapter in my PhD dissertation dedicated to innovation fueled by artists. In that chapter I proposed a new source of innovation called Live Product Development, which is meant to extricate development teams from insular research-and-development environments. A practical methodology, it proposes continuous artistic experimentation during a product’s development phase. By implementing Live Product Development, companies can speed up product development and discover new applications, potentially leading to millions of dollars in savings and new markets.

5. What is the next step, so to speak, after Sidekick is produced? What do you foresee as the next frontier in this field?

I envision a future where humans and digital media interact as naturally as a person among friends. Long gone will be the days when humans hunch over keyboards and mice to look at tiny screens. The world can be a virtual canvas, and we, its painters. I foresee communities seamlessly superimposing video, music, and images onto physical spaces to create augmented, information-enriched cities. To reach this enhanced ecosystem, we first need to boldly develop interfaces that leverage our natural body language and speech.

Brainwave-computer interfaces will be the next big interaction paradigm after that. The field is advancing at tremendous speed thanks to recent investments in prosthetic limbs.


Author: Evan Berk

Editor: Rain Embuscado