Over the last few years, technology has changed how we interact with ourselves and others: it has become commonplace to receive text messages on your watch or track your health through your phone.
With connectivity becoming an increasingly important part of our everyday lives, researchers at the University of Ottawa’s Faculty of Engineering are exploring ways to use artificial intelligence and simulations to create life-changing wearable technology and smarter assistive software.
The future of movement
After studying robotics and experimenting with AI and simulation, Prof. Thomas Uchida realized there was an opportunity to take his knowledge and apply it to the human body.
Uchida and his team work with simulation software that replicates the human musculoskeletal system to better understand how joints and muscles create movement and how much energy and force it takes to make that movement happen.
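To make the idea concrete, here is a minimal forward-dynamics sketch in the spirit of musculoskeletal tools such as OpenSim. It is not the team's actual software: it integrates a single hinge joint (a crude stand-in for a knee) driven by a muscle torque and tallies the mechanical work done, with all parameter values chosen purely for illustration.

```python
import math

def simulate_knee_swing(torque_profile, inertia=0.35, dt=0.001, duration=0.5):
    """Forward-dynamics toy model: integrate one hinge joint driven by a
    time-varying muscle torque (N*m). Returns the final joint angle (rad)
    and the mechanical work done (J). Inertia and duration are illustrative."""
    angle, velocity, work = 0.0, 0.0, 0.0
    for i in range(int(duration / dt)):
        t = i * dt
        torque = torque_profile(t)
        accel = torque / inertia             # rotational Newton's second law
        work += abs(torque * velocity) * dt  # |power| integrated over time
        velocity += accel * dt               # simple Euler integration
        angle += velocity * dt
    return angle, work

# A constant 5 N*m extension torque applied for half a second
angle, work = simulate_knee_swing(lambda t: 5.0)
```

Real musculoskeletal models chain dozens of such joints together with muscle-tendon actuators, but the core loop is the same: forces in, motion and energy out.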
Replicating the inner workings of the body can lead to better-functioning mobility devices, says Uchida, whose research is helping other engineers design and test wearable technology.
“We’re very familiar with crutches and wheelchairs, but there’s a new class of assistive devices called exoskeletons – structures that attach to the body and apply force to help the body to move,” he explains.
“If you apply these forces in the right places, and with the right timing, you can reduce the amount of energy your muscles are expending during walking or running, which can help individuals with mobility issues.”
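The "right timing" point can be illustrated with a toy calculation (my own simplification, not Uchida's model): treat the muscle torque demand over a gait cycle as a half-sine pulse, let the exoskeleton supply a similar pulse, and compare how much residual effort the muscles must provide when the assistance is in phase versus shifted a quarter cycle late.

```python
import math

def muscle_effort(demand, assist, phase_shift=0.0, n=1000):
    """Proxy for muscle energy cost over one gait cycle: the average
    |residual torque| the muscles must supply after the exoskeleton's
    (possibly mistimed) contribution. Sinusoidal pulses and all
    magnitudes are illustrative, not measured gait data."""
    total = 0.0
    for i in range(n):
        t = i / n
        d = demand * max(0.0, math.sin(2 * math.pi * t))
        a = assist * max(0.0, math.sin(2 * math.pi * (t - phase_shift)))
        total += abs(d - a) / n
    return total

well_timed = muscle_effort(40.0, 15.0, phase_shift=0.0)
mistimed = muscle_effort(40.0, 15.0, phase_shift=0.25)
unassisted = muscle_effort(40.0, 0.0)
```

In this toy model, well-timed assistance lowers the residual effort below the unassisted baseline, while the same assistance applied a quarter cycle late helps far less, since the device is then pushing when the muscles don't need it.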
The problem with designing these devices is that every body is physically unique, so a one-size-fits-all prototype isn’t practical.
Because of that, it can be difficult and expensive to test these systems, which is why the team’s simulations and research are critical.
Outside of mobility technology, Uchida says there is an opportunity to apply the research to other areas, from preventing repetitive strain injuries in assembly line workers to rethinking the way we design desk chairs or shoes.
“Many of the products that interact with our bodies are not well-designed and cause us pain, discomfort or injury,” he says. “So the general problem is, how do you design or engineer a device that meshes with or complements the body? Simulations and computational models can help us do that.”
While wearable tech is on the rise, so are assistive technologies such as Google Home and Amazon Alexa, which Prof. Hussein Al Osman believes can be taught to interpret human emotion.
Using affective computing – the development of systems that can interpret and simulate human actions or reactions – Al Osman is looking at how we can teach robots to be emotionally intelligent to better serve our needs and tailor responses to a person’s mood.
“Machines are completely oblivious to our emotions,” he says. “If they can understand the emotion, that would lead to a more natural, helpful, tailored interaction.”
Robots and other technology can be taught to recognize emotion from images by learning facial expressions, as well as through natural language processing and by learning to interpret tone of voice and intonation.
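The final step of such a pipeline, once features have been extracted from a face image or voice recording, often reduces to assigning a label. A toy sketch, assuming a valence-arousal representation (the "circumplex" view of emotion) with made-up centroids rather than anything from a trained system:

```python
import math

# Illustrative emotion centroids in the valence-arousal plane;
# these coordinates are hypothetical, not learned from data.
EMOTIONS = {
    "happy": (0.8, 0.5),
    "angry": (-0.6, 0.8),
    "sad": (-0.7, -0.5),
    "calm": (0.5, -0.6),
}

def classify_emotion(valence, arousal):
    """Nearest-centroid lookup: a toy stand-in for the classification
    stage of an affective-computing pipeline."""
    return min(EMOTIONS, key=lambda e: math.dist((valence, arousal), EMOTIONS[e]))

label = classify_emotion(0.7, 0.4)  # lands nearest the "happy" centroid
```

Production systems replace the hand-picked centroids with models trained on labeled faces or speech, but the framing is the same: continuous signals in, an emotion category out.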
There is even potential to use this research to assist in the diagnosis of mental illness, says Al Osman.
“We’re using the same affective computing technology that we’ve developed, but now instead of trying to classify emotions, we’re trying to classify bipolar disorder states,” he says, adding that there is potential to apply the machine learning technology in a range of medical scenarios.
“Whether it’s tools that help clinicians better diagnose patients or an assistive robot for a senior citizen, there are many opportunities to improve and impact the lives of everyday citizens with this type of technology.”