In April 2019, hackers successfully infiltrated the email accounts of several staff members at St. Ambrose Catholic Parish in Ohio.

Through email communication between church leaders and their construction firm, the hackers learned details about the recent restoration and repair of the church, and they subsequently sent invoices alleging non-payment on a “past due account.” Church officials quickly wired $1.75 million into a fraudulent account, only to discover that they had become victims of an online crime scheme.

In June 2021, Treasure Island, a San Francisco homeless charity, suffered a month-long attack.

Hackers obtained access to the email account of the organization’s bookkeeper, where they found and manipulated a legitimate invoice from one of Treasure Island’s partner organizations. Staff then wired $625,000 in loan funds intended for the partner into a cybercriminal’s account.

In both instances, the criminals targeted small, non-profit organizations, breached the email accounts of key personnel, learned patterns of behavior and information about key contacts, and then fabricated a realistic story with a supporting invoice that was then paid by the victim.

Both the church and the homeless charity were victims of Business Email Compromise (BEC) attacks. In the past, these attacks were easy to recognize: an invoice, for example, from an unknown company or for services never requested, or a demand for payment riddled with spelling and grammatical errors. Criminal gangs eventually grew savvy, impersonating contacts familiar to the recipient, complete with relevant details and a similar writing style. With the introduction of ChatGPT in November 2022, however, these criminal hackers acquired their most astounding tool yet.

With their enormous capacity to enhance human productivity, artificial intelligence (AI) and the consumer platforms built on it (ChatGPT, Bing Chat, and Google Bard, most notably) have garnered much public attention in recent months.

The potential of this generative AI is virtually limitless, and society has only begun to imagine and discuss its applications. Unfortunately, the criminal world is actively exploiting the technology, recently launching WormGPT, a black hat (criminal) tool that enables a host of advanced offensive capabilities, including sophisticated phishing, social engineering, and business email compromise scams. The savvy criminal gangs that once took the time to study their targets’ lives and contacts in order to craft fake emails just got a leg up. WormGPT allows these criminals to create highly plausible communications filled with detailed, relevant information, all in a language that may not be native to the hacker. The generated emails are so convincing that even the most cautious users often find them credible. In fact, according to SlashNext, an email security provider, “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.”

While AI functionality benefits the fight against cybercrime in profound ways, that benefit is balanced by the extraordinary advance in criminal capability.

And while a savvy, trained end user may catch even the best phishing emails and messages from purported business associates, other AI-generated malware threats can lurk below the surface, spreading through a company’s network in new, highly pervasive ways.

We’ll talk more about these stealth attacks in future blogs, but in the meantime, if you’re concerned about your company’s security and want a professional to assess your company’s IT infrastructure, please reach out to us.

Give us a call at (205) 623-1200 or click here for more information.