For most people, buying a home will be their biggest and riskiest financial transaction.
It’s one of the reasons the lawyers at Perley-Robertson, Hill & McDougall LLP/s.r.l. proactively identify issues and mitigate risks for their clients before the worst happens.
Until now, the biggest risks when buying your first home have been getting carried away in a bidding war or waiving a home inspection only to discover mold in the basement.
But today there is a new risk to consider. Like many industries, real estate is being disrupted by artificial intelligence (AI) technology.
That is why the issue is top of mind for lawyers like Matt Mayo, a real estate associate with Perley-Robertson, Hill & McDougall LLP/s.r.l.
Mayo says in a market where bidding wars and low housing stock have upped the pressure and cost of finding your dream home, you need to know if the people you are working with — realtors, mortgage brokers, home inspectors and lawyers — are using AI, and how.
What AI can safely do to support service professionals
In the simplest terms, AI is an algorithm: a set of instructions.
And it is important to remember that those instructions were written by a person, who brings their own biases and the possibility of human error to the table.
Mayo says AI is fine for professionals to use when it supports what they do rather than replaces it, such as handling marketing or powering an AI-driven chatbot that can answer simple questions.
He compares other recent technological developments to illustrate this point.
“The purpose of technology like electronic signatures and secure email, for example, is to make the delivery of professional services easier in some way,” said Mayo.
So when you ask your real estate professionals whether they are using AI, you will be safe working with them if it is being used as a tool to support their professional duties rather than to replace them.
When do you need an experienced professional instead of a robot?
If AI is being used to replace a professional’s experience, cognition, judgment and expert decision making, Mayo says it’s a red flag for a couple of reasons.
First, there are certain things AI simply is not capable of right now.
Any information AI delivers is only as good as the accuracy and currency of its database and algorithm — and both of those rely on humans.
“AI is based on instructions, rules. Fraudsters succeed by finding the loopholes or vulnerabilities in a system,” said Mayo. “AI has no gut, intuition, or sniff test. If AI is used at all in this context, it must not replace anything a human can do.”
The second problem with AI concerns who is assuming the risk.
Professionals who use AI to perform a professional task, like detecting fraud, understanding sophisticated trade terms or even conducting research, are enjoying the benefits of that technology while passing the risk of using it on to their clients.
“The danger of using AI at all in this context is the subtle temptation to off-load the responsibility onto a tool not capable of handling it,” said Mayo. “People lean on the tools they are given. That is what tools are for.”
The aspects of work that ought to be handled by the professional directly include engaging with clients, research, fraud detection, interpretation of due diligence and other legal documents, and the application of expert knowledge to the individual circumstances of the client.
The bottom line? You cannot have a complex conversation with a robot.
“The client is paying for your brain, not an AI,” said Mayo. So when you need a professional, look for someone who is trained, licensed, experienced, and accountable.