Can Your Cough Sound The Alarm On COVID-19? MIT Researchers Believe It Can.


It started as an AI framework for Alzheimer’s; now MIT researchers are adapting their machine learning algorithms to find asymptomatic COVID-19 patients. Early results suggest a breakthrough in detecting the novel coronavirus, as well as a distinctive way to use AI to push forward the frontiers of healthcare.

Brian Subirana is the lead research scientist on the AI project. He works at MIT’s Auto-ID Laboratory, a center that not only focuses on the ‘Internet of Things’ but also coined the phrase. Subirana says that the work he and his co-researchers are doing on the sound of an asymptomatic COVID-19 cough enables detection of the virus without a swab or clinical test. “The sounds of talking and coughing are both influenced by the vocal cords and surrounding organs,” Subirana told MIT News this week. “This means that when you talk, part of your talking is like coughing, and vice versa. It also means that things we easily derive from fluent speech, AI can pick up simply from coughs, including things like the person’s gender, mother tongue, or even emotional state. There’s in fact sentiment embedded in how you cough,” Subirana says. “So we thought, why don’t we try these Alzheimer’s biomarkers [to see if they’re relevant] for Covid.”

Early results of the research are promising. “We think this shows that the way you produce sound changes when you have Covid, even if you’re asymptomatic,” Subirana says. Though not discernible to the human ear, AI systems can be trained to pick up the subtle differences between a COVID-19 cough and a non-COVID one. The research uses four biomarkers to determine whether a cough reflects the COVID-19 disease state: vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation. Initial results show that the model identified 98.5 percent of coughs from people with confirmed COVID-19, including all of the coughs from asymptomatic subjects.
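In plain terms, the “98.5 percent” figure is the model’s sensitivity: the share of confirmed COVID-19 coughs it flags. As an illustration only (the labels below are invented toy data, not the study’s results), here is how sensitivity and its counterpart, specificity, are typically computed from a model’s binary predictions in Python:

```python
# Illustrative sketch: computing sensitivity (true-positive rate) and
# specificity from binary predictions. The labels are invented toy data,
# not the MIT study's results.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = cough from a confirmed COVID-19 subject
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]  # model's binary pre-screening output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # share of COVID-19 coughs the model catches
specificity = tn / (tn + fp)  # share of non-COVID coughs correctly cleared
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```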

The team at the MIT Auto-ID Laboratory published their work in the IEEE Open Journal of Engineering in Medicine and Biology in late September. Their hypothesis was that asymptomatic COVID-19 subjects ‘could be accurately discriminated only from a forced-cough cell phone recording using Artificial Intelligence.’ The paper states that the research assembled the most robust COVID-19 audio dataset on record. “To train our MIT Open Voice model we built a data collection pipeline of COVID-19 cough recordings through our website (opensigma.mit.edu) between April and May 2020, and created the largest audio COVID-19 cough balanced dataset reported to date with 5,320 subjects,” the paper reads.

Further, the researchers state that they developed an AI speech processing framework and “provide a personalized patient saliency map to longitudinally monitor patients in real-time, non-invasively, and at essentially zero variable cost.” Neural networks power the approach, trained on 4,256 of the dataset’s subjects and tested on the remaining 1,064. “Cough recordings are transformed with Mel Frequency Cepstral Coefficient and inputted into a Convolutional Neural Network (CNN) based architecture made up of one Poisson biomarker layer and 3 pre-trained ResNet50’s in parallel, outputting a binary pre-screening diagnostic,” the authors write. “Transfer learning was used to learn biomarker features on larger datasets, previously successfully tested in our Lab on Alzheimer’s, which significantly improves the COVID-19 discrimination accuracy of our architecture.”
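Translated out of the paper’s shorthand: each cough recording is converted into an MFCC “image,” which an ensemble of pretrained ResNet50 networks then scores. The sketch below illustrates that general shape of pipeline in Python; the choice of librosa and torchvision, the input shapes, and the feature-fusion head are assumptions for illustration, and the paper’s Poisson biomarker layer and transfer-learning procedure are not reproduced here.

```python
# Minimal sketch of an MFCC -> parallel-ResNet50 cough classifier, loosely
# modeled on the pipeline the paper describes. Library choices, shapes, and
# the fusion head are assumptions; this is not the authors' implementation.
import librosa
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

def cough_to_mfcc(path: str, sr: int = 16000, n_mfcc: int = 64) -> torch.Tensor:
    """Load a cough recording and turn it into a 3-channel MFCC 'image'."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    mfcc = (mfcc - mfcc.mean()) / (mfcc.std() + 1e-8)         # normalize
    x = torch.tensor(mfcc, dtype=torch.float32).unsqueeze(0)  # (1, n_mfcc, T)
    return x.repeat(3, 1, 1)  # replicate to 3 channels for ResNet input

class ParallelResNetCoughNet(nn.Module):
    """Three pretrained ResNet50 branches in parallel, fused into a binary head."""
    def __init__(self):
        super().__init__()
        def make_branch():
            m = resnet50(weights=ResNet50_Weights.DEFAULT)  # ImageNet pretraining
            m.fc = nn.Identity()  # expose the 2048-d feature vector
            return m
        self.branches = nn.ModuleList([make_branch() for _ in range(3)])
        self.head = nn.Linear(3 * 2048, 1)  # binary pre-screening output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return torch.sigmoid(self.head(feats))  # probability-like score

# Usage: score = ParallelResNetCoughNet()(cough_to_mfcc("cough.wav").unsqueeze(0))
```

The transfer-learning step the authors quote, learning biomarker features on larger datasets, is where their branches would diverge from the plain ImageNet-pretrained ones sketched here.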

The researchers conclude that AI techniques can “produce a free, non-invasive, real-time, any-time, instantly distributable, large-scale COVID-19 asymptomatic screening tool to augment current approaches in containing the spread of COVID-19.” The paper suggests the technique could screen students and workers as they return to their daily lives, and could be applied on public transportation or to other large groups that need regular testing and diagnosis of the virus.

The authors note that the science behind this research could be embedded in smart devices, allowing asymptomatic patients to be monitored for COVID-19 in their own homes. “Ultimately, they envision that audio AI models like the one they’ve developed may be incorporated into smart speakers and other listening devices so that people can conveniently get an initial assessment of their disease risk, perhaps on a daily basis,” MIT News writes.

The consequences of this research could be even more far-reaching than that, according to the writers. “Pandemics could be a thing of the past if pre-screening tools are always on in the background and constantly improved,” the research paper reads. Let’s all raise a glass — and our voices — to the possibility of that.

Artificial intelligence is being implemented in industries all over the world and is a central theme of the research undertaken at UCIPT. Our work in the HOPE study uses data to assess and shift behavioral outcomes among people living with HIV and other populations.
