AI-Powered Fraud: Immediate Action Steps to Protect Companies From Next-Generation Payment Scams

The Google Threat Intelligence Group revealed a chilling reality: nation-states are weaponizing AI tools like Gemini for sophisticated cyberattacks. This new frontier of AI-powered fraud demands immediate attention from business leaders and general counsel, who stand at the confluence of technology, data security, and governance.
Recent Incidents and the Evolving Sophistication of These Attacks
Generative AI, like the tools used by these cybercriminals, can create highly convincing text, images, voice recordings, and even video interactions that are nearly impossible to distinguish from genuine content. In the report Adversarial Misuse of Generative AI, the Google Threat Intelligence Group explains how threat actors from more than 20 countries have used Google’s generative AI tool, Gemini, for nefarious purposes, including cyber espionage, destructive computer network attacks, and attempts to influence online audiences in a deceptive, coordinated manner.
The report explains how cybercriminals, especially in Iran, China, and Russia, are using Gemini to create highly convincing AI-generated content that facilitates advanced phishing techniques and fraudulent wire transfer requests. The report states that criminals are using Gemini for research; content generation, including developing personas and messaging; translation and localization; and finding ways to increase their reach.
The criminals are also using Gemini to learn how to deliver a payload into a company’s network, move laterally within the network, escalate privileges, evade detection, and exfiltrate data.
AI-enabled social engineering has improved the ability of cybercriminals to create highly personalized and more sophisticated content than historical social engineering attempts. Cybercriminals are increasingly using AI to create realistic and interactive audio, video, and text that allows them to target specific individuals by email, telephone, text, videoconferencing, and online postings. Some recent examples are listed below.
Video
In February 2024, a Hong Kong finance worker was tricked into transferring $25 million to criminals after they set up a video call in which the other five participants, including the company’s chief financial officer, were all video deepfakes.
Voice Recordings
In August 2019, a senior executive at a UK-based energy firm was tricked into wiring approximately $243,000 based on an AI-generated voice deepfake that accurately mimicked the distinct German accent of the chief executive of the firm’s parent company, who requested an urgent wire transfer of funds.
Facial Recognition
In June 2024, criminals used deepfake technology to bypass facial recognition security measures, allowing them to steal $11 million from a company’s cryptocurrency account.
Fake Resumes
In May 2024, the U.S. Department of Justice said more than 300 U.S. companies unknowingly hired foreign nationals with ties to North Korea for remote IT work, sending $6.8 million of revenues overseas in a sprawling fraud scheme that helped the country fund its nuclear weapons program.
Immediate Action Steps for Companies
To protect against AI-generated wire transfer and other payment scams, companies should take the following action steps:
- Designated Payment Account: Specify that payments to a vendor will only be made to a single, designated bank account, and that any changes to this account must be made in writing and verified through a secure, pre-established process. The vendor must designate the bank account in its contract or through a binding vendor payment agreement. Remind vendors of the designated payment account and verification process through disclaimers and statements in purchase orders.
- Verification Process: Establish a multi-step verification process for any changes to a vendor’s payment information. The process should include:
  - Written notice from the vendor requesting the change on official letterhead;
  - Verbal confirmation from two designated vendor representatives, whereby the company calls each representative separately using pre-established phone numbers to confirm their authorization of the change;
  - Management review, confirmation, and approval of the payment information change; and
  - A waiting period of at least 48 hours before any changes are implemented.
- Regular Account Verification: Establish a schedule for regular verification of vendors’ payment information, such as annual confirmation of the authorized representatives and their phone numbers, and document the verifications. For companies with hundreds or even thousands of vendors, prioritize which vendors to contact by using the company’s insurance deductible as a cutoff. For instance, with a $50,000 deductible, make it a priority to verify the information of vendors who regularly receive payments of more than $50,000.
- Training Staff: Train employees to presume that any request to change a vendor’s payment information is fraudulent. Conduct regular cybersecurity training to alert employees, management, and IT staff to the threats posed by AI-generated wire transfer or other payment scams. Document the training.
- Risk Assessments: Conduct regular AI-focused risk assessments to measure how well employees are following established data security protocols. Consider conducting simulated attacks on the employees and management who have the authority to update a vendor’s payment information. Do those employees notice and act on banner warnings for emails that originate outside of the company or from email addresses from which they normally do not receive emails? Are they performing and documenting the call back requirements?
- Update Policies and Procedures: Stay current on the new ways in which cybercriminals are attacking their victims, and take those new attack methods into consideration when reviewing and updating payment verification procedures. Review the procedures regularly, and no less than annually.
- Confidentiality and Security: Require each vendor to represent and warrant that it will maintain strict confidentiality of payment instructions and the identities and contact information of its authorized representatives, and that it will follow best practices in maintaining security measures to protect its email and other communication systems.
- Establishing Binding Authority: Clearly state in contracts or vendor payment agreements that only specific, named individuals designated by the vendor have the authority to request changes to the vendor’s payment information and that their actions are binding on the vendor.
- Liability Clause: Include a clause stating that the vendor releases the company from any liability and agrees to hold it harmless from any losses, damages, and legal actions resulting from fraudulent wire transfer requests originating from the vendor’s compromised systems. Specify that the vendor bears responsibility for any losses resulting from compromised email accounts or false instructions originating on its end. This is especially important because there is case law holding that a company remains obligated to pay its vendor if the vendor was unaware of the fraud and the company was in the best position to avoid the loss.
- Indemnification Clause: Include a clause stating that the vendor agrees to defend and indemnify the company from any losses, damages, and legal actions resulting from fraudulent payment requests originating from the vendor’s compromised systems, including requests authorized through the established verification process by the vendor’s designated representatives contacted at the vendor’s designated phone numbers.
- Audit Rights: Include a clause allowing the company to audit the vendor’s security practices related to payment instructions, changes to payment instructions, and email communications. Include the right to review the vendor’s cyber insurance coverage to ensure that the insurance coverage is adequate and will actually protect the company in the event of a loss requiring indemnification. Again, prioritize which vendors to audit based on covering the deductible or the company’s tolerance for how much money it is willing to lose if the insurance coverage turns out to be inadequate.
- Dispute Resolution: Specify a clear process for resolving any disputes related to payment instructions, including governing law and jurisdiction.
- Termination Rights: Reserve the right to immediately terminate the agreement if there’s evidence of fraudulent activity or repeated suspicious requests from the vendor.
Incorporating these terms and conditions can significantly reduce the risk of falling victim to email scams and protect the company from potential losses due to fraudulent payment scams.
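For companies that track payment-change requests in internal tooling, the multi-step verification process described above can be expressed as a simple gate that refuses to apply a change until every step is complete. The sketch below is purely illustrative; the data model, names, and helper function are assumptions, and only the steps themselves (written notice, two verbal confirmations at pre-established numbers, management approval, and a 48-hour hold) come from this article.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

HOLD_PERIOD = timedelta(hours=48)  # waiting period before any change takes effect

@dataclass
class ChangeRequest:
    vendor: str
    new_account: str
    written_notice_on_letterhead: bool = False
    # Confirmations must come from calls the company places to
    # pre-established phone numbers, never numbers supplied in the request.
    verbal_confirmations: set = field(default_factory=set)
    management_approved: bool = False
    requested_at: datetime = field(default_factory=datetime.utcnow)

def may_apply(req: ChangeRequest, now: datetime) -> bool:
    """Return True only if every verification step is complete
    and the 48-hour waiting period has elapsed."""
    return (
        req.written_notice_on_letterhead
        and len(req.verbal_confirmations) >= 2  # two designated representatives
        and req.management_approved
        and now - req.requested_at >= HOLD_PERIOD
    )

# Example: a request missing verbal confirmations is rejected even after 48 hours.
req = ChangeRequest(vendor="Acme Supply", new_account="XXXX-1234",
                    written_notice_on_letterhead=True, management_approved=True)
print(may_apply(req, req.requested_at + timedelta(hours=49)))  # False
req.verbal_confirmations.update({"rep_alice", "rep_bob"})
print(may_apply(req, req.requested_at + timedelta(hours=49)))  # True
```

The point of the gate is that the steps are conjunctive: skipping any one of them, or rushing the waiting period, blocks the change regardless of how legitimate the request appears.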
Conclusion
As AI technology rapidly evolves, so do the threats to companies’ financial security. Implementing robust safeguards against AI-generated fraud is no longer optional; it is critical to the business. Taft’s upcoming articles will explore insurance coverage and legal remedies available to companies victimized by AI-generated fraud.