“People just don’t worry about being scammed”

image source, Clark Hoefnagels

Image caption, Clark Hoefnagels created an AI-based tool that recognizes fraudulent emails

  • author, Jane Wakefield
  • role, Technology reporter

When Clark Hoefnagels' grandmother was conned out of $27,000 (£21,000) last year, he felt compelled to do something about it.

“I felt like my family was vulnerable and I had to do something to protect them,” he says.

“There was a sense of responsibility to handle all things tech for my family.”

As part of his efforts, Mr Hoefnagels, who lives in Ontario, Canada, ran the scam, or "phishing", emails his grandmother had received through the popular AI chatbot ChatGPT.

He was curious to see whether the chatbot would recognize them as fraudulent, and it immediately did.

From this was born an idea that has since grown into a business called Catch. It is an AI system that is trained to recognize fraudulent emails.

Currently compatible with Google’s Gmail, Catch scans incoming emails and highlights any that are considered fraudulent or potentially so.
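Catch's actual model is AI-based and proprietary, so the sketch below is purely illustrative of the general idea of scoring incoming mail for fraud signals. The phrase list, weights, and `phishing_score` function are all invented for this example and are not how Catch works:

```python
# Toy phishing scorer -- an illustration only, not Catch's real method.
import re

# A few classic phishing pressure phrases (assumed list for the sketch).
SUSPICIOUS_PHRASES = [
    "verify your account", "urgent action required",
    "click here immediately", "your account will be suspended",
]

def phishing_score(subject: str, body: str, sender: str) -> float:
    """Return a 0..1 heuristic score; higher means more scam-like."""
    text = f"{subject} {body}".lower()
    score = 0.3 * sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # Links that point at a raw IP address instead of a domain are a red flag.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 0.4
    # Unfamiliar top-level domains add a little suspicion (crude heuristic).
    if not sender.lower().endswith((".com", ".org", ".net", ".ca", ".co.uk")):
        score += 0.2
    return min(score, 1.0)

def flag(subject: str, body: str, sender: str, threshold: float = 0.5) -> bool:
    """Highlight an email as fraudulent or potentially so."""
    return phishing_score(subject, body, sender) >= threshold
```

A real system would use a trained model rather than fixed keywords, but the interface is the same: each incoming message gets a score, and anything over a threshold is highlighted for the user.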

AI tools like ChatGPT, Google Gemini, Claude and Microsoft Copilot are also known as generative AI. This is because they can generate new content.

Originally this meant generating a text response to a question, a request, or the start of a conversation. But generative AI applications can now increasingly create photos and artwork, produce voice content, compose music, and draft documents.

People from all walks of life and industries are increasingly using such AI to improve their work. Unfortunately, so are the scammers.

In fact, a product called FraudGPT is sold on the dark web, allowing criminals to create content for a range of scams, from bank-related phishing emails to custom fraudulent web pages designed to steal personal information.

More worrying is the use of voice cloning, which can be used to convince a relative that a loved one needs financial help, or even, in some cases, that they have been kidnapped and a ransom must be paid.

There are some pretty alarming statistics about the scale of the growing AI fraud problem.

Reports of AI tools being used to attempt to defraud bank systems increased by 84% in 2022, according to the latest data from anti-fraud body Cifas.

The situation is similar in the US, where a report this month said AI “has led to a significant increase in the sophistication of cybercrime”.

image source, Getty Images

Image caption, Studies show that fraudsters are increasingly using AI

Given this growing global threat, you can imagine that Mr. Hoefnagels’ Catch product would be popular with members of the public. Unfortunately, this is not the case.

“People don’t want it,” he says. “We’ve learned that people don’t worry about scams, even after they’ve been scammed.

“We talked to a guy who lost $15,000, and told him we would have caught the email, but he wasn’t interested. People don’t care about any level of protection.”

Mr. Hoefnagels adds that this particular person simply didn’t think it would happen to him again.

The one group that does worry about being scammed, he says, is older people. Yet instead of buying protection, their fears are more often assuaged by a very low-tech tactic: their children telling them simply not to answer calls or respond to emails at all.

Mr Hoefnagels says he fully understands this approach. “After what happened with my grandmother, we basically said ‘don’t answer the phone if it’s not in your contacts, and don’t go into emails anymore’.”

As a result of the apathy Catch has faced, Mr Hoefnagels says he is now closing the business while he looks for a potential buyer.


But while members of the public can be complacent about fraud, and fraudsters are increasingly turning to AI, banks cannot afford to be.

Two-thirds of financial firms now see AI-powered fraud as a “growing threat,” according to a global survey from January.

Fortunately, banks are now increasingly using AI to fight back.

AI-powered software created by Norwegian startup Strise has been helping European banks detect fraudulent transactions and money laundering since 2022. It automatically and quickly reviews millions of transactions per day.

“There are many pieces of the puzzle that you have to put together, and AI software allows those checks to be automated,” says Strise co-founder Marit Rødevand.

“It’s a very complex business and compliance teams have grown dramatically in recent years, but AI can help connect that information very quickly.”

Ms Rødevand adds that it is all about staying one step ahead of the criminals. “Criminals don’t need to care about the law or compliance. And they’re also good at sharing data, while banks can’t because of regulations, so criminals can move on to new technologies more quickly.”

image source, Marit Rødevand

Image caption, Marit Rødevand says the battle for businesses like hers is to stay ahead of cybercriminals

Featurespace, another tech firm that makes AI software to help banks fight fraud, says its system works by spotting behavior that is out of the ordinary.

“We don’t track the behavior of the fraudster, but instead we track the behavior of the real customer,” says Martina King, CEO of the Anglo-American company.

“We build a statistical profile around what good, normal behavior looks like. We can see, based on the data the bank holds, whether something is normal behavior or anomalous.”
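Featurespace's adaptive behavioral analytics are proprietary, but the core idea Ms King describes, profiling a customer's normal behavior and flagging departures from it, can be illustrated with a minimal z-score check. The function name and the 3-sigma threshold below are assumptions made for this sketch, not Featurespace's method:

```python
# Illustrative only: a minimal "normal profile" check using a z-score.
# Real systems profile many features per customer, not just amounts.
from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag a transaction that sits far outside a customer's usual spending."""
    mu = mean(history)      # the customer's "good normal"
    sigma = stdev(history)  # how much their spending usually varies
    if sigma == 0:
        # No variation on record: any change from the norm stands out.
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold
```

Under this scheme a customer who usually spends tens of dollars would have a $5,000 transfer flagged for review, while everyday purchases pass through silently.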

The firm says it now works with banks such as HSBC, NatWest and TSB and has contracts in 27 different countries.

Back in Ontario, Mr Hoefnagels says that while he was initially frustrated that more members of the public didn’t appreciate the growing risk of fraud, he now understands that people simply don’t think it will happen to them.

“It made me kinder to people, and [made me want] instead to try to put more pressure on companies and governments.”
