A New Weapon Against an Old Form of Fraud
Fraud is a growing issue in the financial services industry, and the threat is evolving faster than ever as fraudsters take advantage of advancements in technology like generative AI. Financial institutions are seeing growth in the number of fraud attacks, and 80% of these attacks are taking place in digital channels.
And yet, check fraud, an “analog” form of financial fraud, is also growing – almost 400% since 2020, according to industry data. The good news: 93% of the financial institutions Fiserv speaks to say they believe generative AI will help fortify fraud detection to protect customers and members. They’re aware of the potential of AI technology, and are looking for solutions.
To explore the rise in check fraud and the state of AI fraud detection, Justin Jackson, SVP of Enterprise Payment Solutions at Fiserv, spoke with Bliink AI chief technology officer Alak Das. Bliink AI is an independent technology firm that creates fraud prevention solutions that use generative AI to help financial institutions ensure compliance, data privacy and security.
Jackson and Das discussed how AI tools can be used to fight fraud and strengthen security at financial institutions without hurting the client experience, while still encouraging customer adoption.
The following transcript of the conversation has been edited for length and clarity.
Jackson: Alak, why does check fraud remain such a major issue, even with the digital transformation underway at many financial institutions?
Das: A check is an easy target, with the lowest entry barrier for fraudsters. Physical checks are still used widely by the small business sector, and even by government. A physical piece of paper, handled by many people and often traveling from place to place through the mail, can easily be stolen. Fraudsters can then use a simple chemical to wash the ink from the check and change the amount.
The good news, from Bliink’s point of view, is that AI solutions are now bringing automation capabilities to check fraud prevention. Thanks to artificial intelligence and machine learning, image analytics has matured to a point where we can more easily identify whether a suspect check is first party or third party, or whether it was washed or altered in some other way. AI is going to play a major role in check fraud detection, especially the generative AI models that are available to us.
Jackson: Detecting the theft and washing of checks by third parties is very different from detecting fraud committed by someone with access to the account. How do you see AI playing a role in the detection of first-party fraud?
Das: AI is taking us to a point where we can be very confident in identifying potential first-party or third-party fraud, and distinguishing between the two. That's a very important first step in revealing fraud.
With today's technology, once we have your signature, AI models can compare the latest signature with the ones on your last 20 checks, even down to the impression depth of the ink, to detect any deviation, such as in the shape or extension of individual letters. The models can detect anomalies at that level of detail. A deviation doesn't necessarily mean a given check is fraudulent, but depending on the confidence score the software generates, we can decide whether the check should be investigated, allowed to go through, or whether the account itself should be put on some kind of watchlist.
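The three-way decision Das describes can be sketched in code. This is only an illustration of the routing logic, not Bliink's actual system; the function name, threshold values, and score scale are all hypothetical assumptions.

```python
# Hypothetical thresholds for routing a check based on a model's
# fraud-confidence score (0.0 = no concern, 1.0 = near-certain fraud).
INVESTIGATE_THRESHOLD = 0.85
WATCHLIST_THRESHOLD = 0.60

def route_check(confidence_score: float) -> str:
    """Map a signature-deviation confidence score to one of three actions."""
    if confidence_score >= INVESTIGATE_THRESHOLD:
        return "investigate"  # hold the check for manual review
    if confidence_score >= WATCHLIST_THRESHOLD:
        return "watchlist"    # let the check clear, but flag the account
    return "allow"            # deviation is within normal variation
```

In practice the thresholds would be tuned against historical fraud outcomes rather than fixed by hand, balancing missed fraud against the cost of unnecessary investigations.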
Jackson: Is that how you're seeing the industry adopt these AI tools, as support for human decision making?
Das: Yes, but it goes beyond just analyzing the check image or the physical check itself. We look at the whole of customer behavior: How many checks in a month does this customer typically write? What is the average check size? What are the different places these checks are paid to?
These are the kinds of data we collect and keep, as well as behavior in digital channels. All these data points from various systems help create a customer risk rating. And that is an early indicator that can help direct human detection. That’s what AI tools are enabling us to do.
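The behavioral signals Das lists, check volume, average check size, and payees, can be combined into a simple score. This sketch is a hypothetical illustration of a customer risk rating, not Bliink's model; the equal weighting and all names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class CheckActivity:
    """A customer's historical check-writing profile."""
    checks_per_month: float   # historical average volume
    avg_amount: float         # historical average check size
    known_payees: set[str]    # payees seen before

def risk_rating(history: CheckActivity, month_checks: int,
                month_avg_amount: float, month_payees: set[str]) -> float:
    """Combine simple behavioral deviations into a 0-1 risk score."""
    volume_dev = abs(month_checks - history.checks_per_month) / max(history.checks_per_month, 1)
    amount_dev = abs(month_avg_amount - history.avg_amount) / max(history.avg_amount, 1)
    new_payee_ratio = len(month_payees - history.known_payees) / max(len(month_payees), 1)
    # Equal weighting is arbitrary; a production model would learn these weights
    # from labeled fraud outcomes across many accounts.
    score = (volume_dev + amount_dev + new_payee_ratio) / 3
    return min(score, 1.0)
```

A month that matches the customer's history scores near zero, while a spike in volume, unusually large checks, or a batch of never-before-seen payees pushes the rating up, the kind of early indicator that can direct human review.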
Jackson: When a customer is new and being onboarded, we don't necessarily have all that behavioral data. Does that create added friction with the use of these systems early in the life cycle? That’s a big concern for financial institutions in dealing with new customers.
Das: I would say no. True, when a customer is first onboarded, we really do not have enough history with us. But we do know how long the customer has existed. If a customer has been out there for ten years, we can gather enough data to begin creating a customer behavior profile at the time of onboarding.
You can profile their payment behavior, their account behavior, how old the account is, who the beneficiaries are. That’s the kind of data we use to train our AI model.
Then comes the new generative AI piece of it, which has a lot of regulatory issues associated with how you use it. But assuming we are able to use it, we use all the signals and alarms detected to train the model. Then we can predict the likelihood of fraud in a specific case, perhaps because we have already seen evidence of something else, such as a “mule” account. Those are the kinds of things having behavioral data enables us to detect.
Fighting synthetic identities with AI
A growing problem is the use of AI to create convincing synthetic identities and “deep-fake” audio and video. AI tools are now good enough, and fast enough, that a fraudster can speak into a computer that interprets their voice and speech patterns and translates them into a different voice for playback through a phone line, in real time.
So, a foreign national working a scam can make a phone call in unaccented English, even in the voice of the opposite gender, to an individual accountholder, a bank’s call center, or even to a financial institution’s senior leadership. This technique is used to fish for personal information, access accounts or get spurious payments and withdrawals approved.
Even video calls can be manipulated. “One example we heard about was a call from a bank CEO, contacting the CFO and asking to authorize some payments,” Das said. “But the image of the ‘CEO’ on the video was completely AI generated.”
Those kinds of attacks are scary, real and becoming more frequent. Das said there are new AI tools to help detect the hidden markers in these false communications, and financial institutions need to embrace them.
“With AI technology being commoditized and available to bad actors, synthetic IDs and ID theft have become a major concern,” he said. “And that means using AI to fight fraud is no longer just an option – it’s going to be required every step of the way. We have to fight AI with AI.”
Jackson: What kind of guidance would you give banks and credit unions right now to help manage fraud within the small business and corporate commercial segments?
Das: The first question is, how do you get your customer to opt in to fraud prevention? I believe the default should be “opt in,” with customers then having the ability to opt out. If you make fraud protection optional upfront, almost 60% of your customers will probably never sign up, and then they can’t be protected no matter what you do.
Second, you have to be careful not to create a solution that is just one more technology customers have to learn and keep track of. Fraud prevention should not sit in the forefront, causing friction and pulling customers’ attention away from their day-to-day work. If you can instead embed a solution that is easy to use while staff is logged in to process payments, people will be much more comfortable with it.
Third, remember that friction in the fraud prevention process isn’t just an adoption issue with small businesses; it may become an attrition issue. Requiring too much extra effort to prevent fraud is operationally distracting and burdensome for customers. You have to try to provide these services silently, in the background, or fraud prevention itself may become a reason for a customer to move away.