Companies Identify Risk Factors of Artificial Intelligence
Twenty years ago, companies were just beginning to grapple with the “internet of things.” That emerging technology forced them to manage the compliance, regulatory, and legal risks introduced by new data-privacy and security concerns.
Now, companies find themselves facing a thornier challenge: how to harness the potential benefits of artificial intelligence technologies while protecting themselves from new risks that could be lurking around the corner. Several companies in recent risk factor disclosures have addressed the potential dangers of AI, particularly regulatory compliance risks, the unreliability of generative AI, and the possibility of reputational harm.
GitLab Inc., which offers an open-source software development platform, said in a quarterly 10-Q filing last month that incorporating generative AI into some of its products may result in “hallucinatory behavior” and present “operational and reputational risks.”

Similarly, cloud software developer Splunk Inc. in its most recent 10-Q detailed concerns about reputational harm and the legal and regulatory risks of developing and using AI in an uncertain regulatory environment. Incorporating AI technologies into new or existing products could bring “new or enhanced governmental or regulatory scrutiny” and could “adversely affect” the company’s business, reputation, or financial results. Splunk also flagged a consideration that seems to be flying under the public’s radar: “The intellectual property ownership and license rights, including copyright, surrounding AI technologies has not been fully addressed by U.S. courts or other federal or state laws or regulations.”
Enterprise software company Sprinklr pointed out in a 10-Q filing from 2022 that jurisdictions beyond the United States, the EU, and the UK are beginning to pass more stringent data-privacy and security laws, rules, and regulations that may affect its operations. “Existing and future laws and evolving attitudes about data privacy and security may impair our ability to collect, use, and maintain data points of sufficient type or quantity to develop and train our artificial intelligence algorithms,” Sprinklr said.
Similarly, ZipRecruiter in a 10-Q cited the potential “reputational risks” and adverse effects of using AI. The company noted its brand could suffer if the “recommendations, forecasts, or analyses that AI applications assist in producing are deficient or inaccurate.” Additionally, the marketplace could sour on ZipRecruiter if “we enable or offer AI solutions that are controversial because of their purported or real impact on human rights, privacy, employment, or other social issues,” according to the company.
And if you’re looking for an even more sobering take on the potential impact of AI, check out insurance company Lemonade Inc.’s FY 2022 10-K, filed this past March. “Our proprietary artificial intelligence algorithms may not operate properly or as we expect them to, which could cause us to write policies we should not write, price those policies inappropriately or overpay claims that are made by our customers,” Lemonade stated. “Moreover, our proprietary artificial intelligence algorithms may lead to unintentional bias and discrimination.”
Early adopters that have jumped into AI with both feet, those worried that AI may render their jobs unnecessary, and shareholders alike may be heartened to see that companies are taking a decidedly measured approach. At least for now, risk factors covering intellectual property, accuracy, inadvertent discrimination, and more send a clear message: while issuers acknowledge AI’s potential, they believe a human mind must still review its output and decision-making.