Cybersecurity Threats to Financial Services Emerge with Growth of AI

The hit film Terminator 2: Judgment Day cemented Arnold Schwarzenegger’s leading-man status with his portrayal of a reprogrammed T-800 Terminator assigned to help humanity stop the artificial intelligence Skynet from taking over the world. What was intended to be fanciful fiction in 1991 seems prescient today. As AI advancements in the real world continue to grow at a rapid pace, so too do cybersecurity risks for all industries – particularly the financial services sector.

Last week, the New York State Department of Financial Services advised firms under its purview to be more vigilant in monitoring and assessing the risks emerging from AI-enabled tools. In an industry letter dated October 16, DFS released guidance designed to help businesses better understand and assess cybersecurity risks – such as social engineering, cyberattacks and the theft of nonpublic information. The guidance does not impose any new requirements beyond firms' existing obligations under the department's cybersecurity regulations, the DFS said. Instead, it is intended to explain how firms should use the existing regulatory framework to assess and address AI-related cyber risks.

The DFS highlighted four of the “more concerning threats” identified by cybersecurity experts. The first two pertain to risks posed by threat actors’ use of AI, while the last two relate to risks arising from a firm’s own use of or reliance on AI. They include:

  • AI-Enabled Social Engineering. DFS said AI has made it easier for threat actors to create “highly personalized and more sophisticated content that is more convincing than historical social engineering attempts.” Moreover, they are increasingly using AI to create deepfakes, including realistic and interactive audio, video, and text, DFS said.
  • AI-Enhanced Cybersecurity Attacks. This “major” risk involves an increase in the potency, scale and speed of existing types of cyberattacks. Because AI can work faster and more efficiently than humans, threat actors can gain access to information systems at a more rapid rate. Once inside an organization’s system, AI can quickly identify how best to deploy malware and how to access and exfiltrate nonpublic information. AI can also develop new malware variants faster and alter ransomware to evade detection. Combined, these threats could increase the number and severity of cyberattacks in the financial services sector, where highly sensitive nonpublic information “creates a particularly attractive and lucrative target for threat actors,” according to DFS.
  • Exposure or Theft of Vast Amounts of Nonpublic Information. Firms that maintain nonpublic information in large quantities are especially vulnerable, according to the DFS. Threat actors have a greater incentive to target them to extract nonpublic information for “financial gain or other malicious purposes,” the DFS said.
  • Increased Vulnerabilities Due to Third-Party, Vendor and Other Supply Chain Dependencies. Because data gathering frequently involves working with vendors and third-party service providers, each link in the supply chain introduces potential security vulnerabilities that can be exploited by threat actors, the DFS said.

And how are financial statement issuers dealing with AI in their public disclosures? A survey of filings with the Securities and Exchange Commission compiled using the Intelligize+ AI™ platform found little to go on.

For example, Deutsche Bank mentioned AI in a filing from March, noting that its applications posed a cybersecurity risk to the company. In a Form 10-K filing from August, Seagate Technology Holdings PLC offered a laundry list of cybersecurity threats stemming from AI: “For example, attacks could be crafted with an AI tool to attack information systems by creating more effective phishing emails or social engineering or by exploiting vulnerabilities in electronic security programs utilizing false image or voice recognition, or could result from our or our customers or business partners incorporating the output of an AI tool, such as malicious code from an AI-generated source code.”

If a March report from the Treasury Department is any indication, companies in the financial services sector still have plenty of work left to do to inoculate their IT systems against AI threats. For instance, the report noted that companies doubt whether their security frameworks can hold up against generative AI hacking techniques. “As access to advanced AI tools becomes more widespread, it is likely that, at least initially, cyberthreat actors utilizing emerging AI tools will have the advantage by outpacing and outnumbering their targets,” the report said.
