
Bridging the language gap between human and machine

uOttawa Faculty of Engineering lab teaches AI to truly understand written communication


When it comes to human-machine interaction, you could say that language is the last frontier.

Natural language processing, or NLP, is the branch of artificial intelligence that deals with the interaction between computers and humans using natural language. In this field, adaptive algorithms are trained to understand and draw conclusions from human languages.

The applications are far-reaching: large-scale content analysis, more effective engagement through voice interfaces, newsgathering, and predicting trends or behaviours in a target population over time.

Prof. Diana Inkpen is the director of the University of Ottawa’s Natural Language Processing Lab. Her team of PhD and MSc students is pushing this frontier, primarily through applied research that responds to the real-world needs of public- and private-sector partners outside the university.

The lab’s specialty is extracting information and insight from text: what people write, and how a machine can understand what those texts mean.

“Human language is very difficult in general for machines, because they don’t have the context of experience that we do to understand nuances of meaning, or to classify text by emotion or mood,” Prof. Inkpen said. “We have to train an algorithm by feeding it large volumes of the right kind of data specific to the domain knowledge we want it to have.”
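
To make that concrete, here is a minimal, hypothetical sketch in Python (not the lab’s actual code) of the training Prof. Inkpen describes: a handful of labelled sentences stand in for the large volumes of domain-specific text a real project would need, and a simple classifier learns to tag new text by mood.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # A toy training set: in practice this would be thousands of labelled
    # texts drawn from the target domain.
    texts = [
        "I can't stop smiling today, everything went right",
        "What a wonderful surprise from my friends",
        "I feel completely alone and exhausted",
        "Nothing I do seems to matter anymore",
    ]
    moods = ["positive", "positive", "negative", "negative"]

    # Turn raw text into word-weight features, then fit a linear classifier.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, moods)

    # The trained model can now label text it has never seen before.
    print(model.predict(["I feel so alone and worn out today"]))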

This is her passion. Prof. Inkpen has served as co-chair of, and invited speaker at, many conferences on AI and natural language processing, published more than 30 journal articles and 100 conference papers, and contributed to several books on the subject. She also serves as editor-in-chief of the journal Computational Intelligence (Wiley) and as associate editor of Natural Language Engineering (Cambridge University Press).

Real-world applications

Her team has a variety of projects on the go. These include:

  • Crawling the web to analyze what people share publicly on social media, to gauge mental health and develop predictive models that can flag situations where intervention may be required.
  • Tracking children’s communication online, to alert parents to signs of cyberbullying, substance abuse or mental health challenges.
  • Training chatbot algorithms with the specific domain knowledge needed to give consumers a single, easy way to find key information in situations where a general-purpose voice assistant like Siri or Alexa falls short.
  • Text mining for legal purposes, a new project in collaboration with the Faculty of Law at uOttawa and with the Department of Justice. The goal here is to save people from having to spend tens of hours wading through reams of documentary evidence to flag items of interest that may be relevant to a case.

Protecting privacy, avoiding bias

What’s interesting about this work is the way privacy factors into both ends of the equation.

On the one hand, an adaptive algorithm designed to learn over time can only be trained by feeding it as much relevant real-world data as possible, and the research team must secure that data so there is no risk of a privacy breach when sensitive personal information is involved. On the other hand, once an algorithm is deployed, its real-world use must comply with all relevant privacy regulations.

Prof. Inkpen notes it’s not just about having enough data, but the right data, to ensure fairness and avoid bias. One example is the algorithm Amazon employed to screen job applicants. In 2015, the company discovered that the algorithm was biased against women because it had been trained on resumes submitted over the previous 10 years, the vast majority of which came from men.
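
As a hedged illustration of how such a skew can be caught, the sketch below (Python again, with purely synthetic placeholder data rather than anything from the Amazon case) builds a training set in which one group supplies 90 per cent of the examples, then checks how a model trained on it performs for each group. A lopsided group count or a large gap in per-group error rates is the kind of red flag that says the data, not just the algorithm, needs fixing.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-in for historical training data: 90% of examples come
    # from group A and only 10% from group B, mirroring the skew described above.
    n = 1000
    group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])
    X = rng.normal(size=(n, 5))
    # A toy "suitable candidate" label driven by the features, not the group.
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    pred = model.predict(X)

    # Audit: group representation and per-group error rate.
    for g in ("A", "B"):
        mask = group == g
        err = (pred[mask] != y[mask]).mean()
        print(f"group {g}: {mask.sum()} examples, error rate {err:.3f}")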

Ultimately, Prof. Inkpen’s work is about making that transfer from the university lab to the real world, where it can improve lives.

“Technology transfer is something we value, because that is the purpose of applied research: to solve real problems and, in our case, determine how AI can better help real humans,” she said.

Learn more

Discover other cool areas of applied research that are underway at uOttawa’s Faculty of Engineering at engineering.uOttawa.ca.