AI: The Next Frontier in Antitrust Law and Regulation

Having completed the “first tech monopoly trial of the internet era” against Google over its search advertising business, the federal government is now exploring potential antitrust issues in artificial intelligence.

According to Jonathan Kanter, the assistant attorney general in charge of the Department of Justice Antitrust Division, emerging issues in AI deserve immediate attention. In an interview with the Financial Times, Kanter said his team is digging into “monopoly choke points and the competitive landscape” in AI. The fear is that a few well-resourced companies have already gobbled up most of the market power over the latest transformative technology.

The Federal Trade Commission opened its own inquiry in January, seeking more information from major tech companies about their investments and partnerships across the AI sector. “We are scrutinizing whether these ties enable dominant firms to exert undue influence or gain privileged access in ways that could undermine fair competition across layers of the AI stack,” said FTC Chair Lina Khan. Specifically, Khan said the FTC wants to know whether some of the new AI partnerships represent end-runs around formal merger reviews.

In fact, the Justice Department and FTC have divvied up responsibility for overseeing competitors in the AI market: DOJ gets Silicon Valley phenom Nvidia Corp.; the FTC is taking on venerable tech giant Microsoft Corp. and OpenAI, the creator of ChatGPT. A similar project five years ago produced the antitrust lawsuit against Google and other cases involving big tech power players such as Apple and Amazon.

Regulators in Europe are probably wondering what took so long for AI scrutiny to ramp up in the U.S. European Union policymakers in December approved the AI Act, a groundbreaking law intended to govern the use of AI technologies. The EU regulations primarily target AI applications that could do the most damage – those posing infrastructure and security risks, for instance. Additionally, developers of AI systems would be subject to new transparency requirements, and providers of tools used to make so-called deepfake images and videos would be required to label AI-generated outputs.

Despite the efforts of the Biden administration to police AI more vigorously, lawmakers in the U.S. seem far more reluctant than their European counterparts to intervene in the market. (Keep in mind that the first draft of the AI Act circulated in 2021.) In May, a bipartisan group of senators including Senate Majority Leader Chuck Schumer of New York proposed a $32 billion spending plan for AI research and development. Notably absent: any specific details about regulating the sector.

It shouldn’t come as a surprise that legislators would hesitate to put a leash on what some are projecting to be a trillion-dollar business. However, if they continue putting off serious efforts to regulate AI, who knows what the sector will look like when they finally decide to act.
