By Brad Diamond
A Reactive Robotic Microphone
Would you like an acoustic swarm of tiny robots in your house, adapting and rearranging itself to act as a complex, versatile microphone? Drawing on advances in both machine learning and microphone hardware, Professor Shyam Gollakota of the University of Washington and his lab of students created a self-dispersing, self-rearranging acoustic swarm. The swarm dynamically reacts to a room's environment and to the number and positions of the people in it, improving audio quality and tackling the Cocktail Party Problem.
The Cocktail Party Problem
If you’re like me, it can be easy to get overwhelmed and confused by a plethora of overlapping conversations, as the sounds become a cacophony our ears can’t disentangle. This is the classic Cocktail Party Problem.
When a group of people speak at the same time, their voices overlap in both frequency and time. These sounds mix in the air before they ever reach an ear, so multiple simultaneous speakers become an indistinguishable mess for listeners: each voice acts as direct acoustic interference with our understanding of the others.
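To see why this mixing is so hard to undo, here is a toy sketch in Python (an illustration, not the lab's code). Two "voices" are modeled as simple tones; by the time they reach a single microphone, the air pressures have already added together, and the recording carries no label saying which sample came from which speaker.

```python
import numpy as np

rate = 16000                      # samples per second
t = np.arange(rate) / rate        # one second of time stamps

voice_a = np.sin(2 * np.pi * 220 * t)   # speaker A as a 220 Hz tone
voice_b = np.sin(2 * np.pi * 330 * t)   # speaker B as a 330 Hz tone

# A single microphone records only the sum of the two pressure waves.
mixture = voice_a + voice_b

# The mixture is one stream of numbers with no record of which part
# belongs to which speaker -- that is the Cocktail Party Problem.
print(mixture[:5])
```

Real speech is far messier than two tones, but the core difficulty is the same: from one mixture signal alone, there is no unique way to recover the original sources.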
Of course, lots of people can simply focus their attention on a single conversation. The human mind, in all its brilliance, can filter out extraneous noises, letting everything else fade away. So, what does the acoustic swarm do then?
The Acoustic Swarm
For audio-dependent consumer electronics, the Cocktail Party Problem is a serious issue. A conventional microphone records the entire mixture as a single signal, making it impossible to recover each separate audio source. The acoustic swarm addresses this by adjusting its physical spacing to capture each sound independently.
Each robot is essentially a mechanized microphone, able to dynamically reposition itself to get the best audio quality possible. Their compact size, at about 30 millimeters across, makes them an unobtrusive addition to any room. Furthermore, the hard work of Professor Gollakota and his lab has produced operating procedures that make the robots entirely autonomous.
The acoustic swarm deploys itself from a central recharging unit, with each robot positioning itself for the best acoustic pickup of its assigned range. This network of robots covers a large auditory area, with each robot focusing on a smaller region and therefore capturing different sources with greater fidelity. Because the robots are mobile, the swarm can reorient toward different audio sources, capturing undisturbed audio at its source.
The key to this capture is the spacing between microphones, as Professor Gollakota explains in the following clip:
The acoustic swarm's distributed microphone array, which by its very nature can change its own spacing as needed, provides better localization using sound alone. That localization is what lets the array separate different noises from one another.
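The intuition behind sound-only localization can be shown with a toy Python sketch (a simplification, not the lab's algorithm). A sound reaches two spaced microphones at slightly different times, and cross-correlating the two recordings reveals that delay, which in turn constrains where the source must be.

```python
import numpy as np

rate = 16000                             # samples per second
rng = np.random.default_rng(0)
source = rng.standard_normal(rate)       # a noise-like stand-in for a voice

# The sound arrives 12 samples later at the farther microphone.
true_delay = 12
mic_near = source
mic_far = np.concatenate([np.zeros(true_delay), source[:-true_delay]])

# The peak of the full cross-correlation sits at the lag that best
# aligns the two recordings, recovering the arrival-time difference.
corr = np.correlate(mic_far, mic_near, mode="full")
lag = int(np.argmax(corr)) - (len(mic_near) - 1)

# The lag converts to an extra path length via the speed of sound,
# which constrains the source's position relative to the pair of mics.
extra_path_m = lag / rate * 343.0
print(lag, extra_path_m)
```

The farther apart the microphones sit, the larger and easier to measure these delays become, which is why the swarm's ability to spread itself out across a room matters so much.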
The acoustic swarm separates overlapping conversations into distinct, understandable streams, even when those conversations happen in the same space. Additionally, the swarm can create mute and active zones for recording, giving even greater control over sound. When people find themselves overwhelmed by the Cocktail Party Problem, the acoustic swarm is there to fix it.
Hear the effect of the acoustic swarm in the following clip. Despite the close proximity of the speakers, the acoustic swarm separates their audio into two distinct zones.
What solution could be better than the easiest one? The acoustic swarm, with its built-in adaptability to any location, is that easy option for conferencing needs. It transforms any space into a much-needed conference room, without the time and energy of setting up a series of microphones yourself.
The acoustic swarm turns a cacophony of sound into an understandable message that features every voice at the table. As spatial control of audio improves, the limits of consumer audio devices shift and new applications become possible. Innovations like the acoustic swarm are spearheading this change to the audio of the future.
Our next article takes another look into the future, once again with Professor Gollakota. We return to his lab to learn more about the exciting work they have done on semantic hearing.
In the meantime, if you want to learn more about the acoustic swarm, head to Professor Gollakota's webpage. Or check out one of our Audio Classroom articles, like this one exploring broadside microphone arrays.