How Your Smart Speaker or Smartphone Can Save You from Cardiac Arrest
Researchers have developed a new AI system that could help save lives during cardiac arrest. If you’ve never seen someone go into cardiac arrest, the signs are startling: a patient suddenly becomes unresponsive and may begin gasping for air. That gasping is known as agonal breathing.
If a bystander is around to provide CPR, it can double or triple a patient’s chances of surviving the event. But what about those who live alone, or whose household members are asleep or out of the house? Too often, an unwitnessed arrest proves fatal.
So researchers at the University of Washington set out to improve a cardiac arrest victim’s chances of survival by designing a smart speaker skill (for Google Home and Alexa devices) and a smartphone app that monitors a room, listening for agonal breathing. To teach the program to recognize the sound, they analyzed real agonal breathing events recorded during 911 calls. Agonal breathing occurs in about 50% of all cardiac arrest cases, and 911 dispatchers commonly ask a caller to hold the phone up to the victim to listen for the distinctive sound; it helps them identify cardiac arrest and instruct callers to begin CPR immediately while paramedics are en route.
The program correctly identified agonal breathing 97% of the time from up to 20 feet away. When it detects the sound, it sounds an alert for other people in the house, and if no one responds, it can automatically call 911. The team published its findings on June 19th in npj Digital Medicine. One of the most groundbreaking aspects of the system is that it is contactless: no device needs to be attached to the patient.
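The paper’s exact alert logic isn’t spelled out here, but the escalation just described — wake the household first, then dial 911 only if nobody responds — can be sketched roughly like this. The function and callback names below are hypothetical placeholders, not the researchers’ code:

```python
ALERT_TIMEOUT_S = 30  # assumed response window before escalating to 911

def handle_detection(sound_alarm, wait_for_response, call_911):
    """Escalation sketch: alarm first, dial 911 only if nobody cancels it.

    sound_alarm:       callback that plays an audible alert on the device
    wait_for_response: callback returning True if a person responds in time
    call_911:          callback that places the emergency call
    """
    sound_alarm()  # try to wake anyone else in the house
    if wait_for_response(ALERT_TIMEOUT_S):
        return "cancelled"  # a household member responded; no emergency call
    call_911()              # no response within the window: escalate
    return "called_911"
```

In practice the callbacks would be wired to the device’s speaker and telephony APIs; the timeout value here is an assumption for illustration.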
Collecting the data was no easy task. The team gathered 162 emergency calls placed to Seattle’s EMS between 2009 and 2017. From those calls, they extracted 2.5 seconds of audio from the start of each agonal breath, yielding 236 clips. By applying machine learning data-augmentation techniques, they expanded these into a dataset of 7,316 positive identification clips.
To make sure the program could distinguish agonal breaths from other sounds, they used 83 hours of audio collected from various sleep studies. These recordings include other breathing difficulties, like those heard in sleep apnea patients and loud snorers, and produced 7,305 negative identification clips. They also mixed in common household noises such as traffic, dogs, cats, and air conditioning.
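The study’s precise augmentation pipeline isn’t described above, but the general idea — multiplying a small set of 2.5-second clips by shifting them in time and mixing in background noise — can be sketched with NumPy. This is a minimal illustration under assumed parameters (sample rate, shift range, signal-to-noise ratio), not the authors’ code; note that roughly 31 variants per clip would turn 236 clips into the reported 7,316:

```python
import numpy as np

SAMPLE_RATE = 8000                 # assumed; 911 call audio is low bandwidth
CLIP_LEN = int(2.5 * SAMPLE_RATE)  # 2.5-second clips, as in the study

def augment(clip, noise, rng, shift_max=400, snr_db=10.0):
    """Make one augmented positive example: random time shift + noise mix."""
    shift = rng.integers(-shift_max, shift_max + 1)
    shifted = np.roll(clip, shift)
    # scale the noise so the breath sits roughly snr_db above it
    clip_pow = np.mean(shifted ** 2)
    noise_pow = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(clip_pow / (noise_pow * 10 ** (snr_db / 10)))
    return shifted + gain * noise[:CLIP_LEN]

rng = np.random.default_rng(0)
clip = rng.standard_normal(CLIP_LEN)   # stand-in for a real agonal-breath clip
noise = rng.standard_normal(CLIP_LEN)  # stand-in for household/traffic noise
augmented = [augment(clip, noise, rng) for _ in range(31)]  # 236 × 31 = 7,316
```

Each augmented clip is then labeled positive, while the sleep-study audio supplies the negatives for training a binary classifier.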
The tool is not available just yet, but the researchers envision releasing it to the public as an app and as smart speaker skills. It would run locally on the device, so no audio would need to be stored or uploaded to the cloud. And although the correct-identification rate is promising, they say they will need more sound samples to further improve the program.
Once approved for release, the technology will be commercialized through Sound Life Sciences, Inc., a UW spinoff focused on bringing new medical tech to the masses using hardware many of us already own.