AI: The Next Frontier in Antitrust Law and Regulation

Having completed the “first tech monopoly trial of the internet era” against Google over its search and advertising business, the federal government is now exploring potential antitrust issues in artificial intelligence.

According to Jonathan Kanter, the assistant attorney general in charge of the Department of Justice Antitrust Division, emerging issues in AI deserve immediate attention. In an interview with the Financial Times, Kanter said his team is digging into “monopoly choke points and the competitive landscape” in AI. The fear is that a few well-resourced companies have already gobbled up most of the market power over the latest transformative technology.

The Federal Trade Commission had already begun an inquiry in January, seeking more information from major tech companies about their investments and partnerships across the AI sector. “We are scrutinizing whether these ties enable dominant firms to exert undue influence or gain privileged access in ways that could undermine fair competition across layers of the AI stack,” said FTC Chair Lina Khan. Specifically, Khan said the FTC wants to know if some of the new AI partnerships represent end-runs around formal merger reviews.

In fact, the Justice Department and FTC have divvied up responsibility for overseeing competitors in the AI market: DOJ gets Silicon Valley phenom Nvidia Corp.; the FTC is taking on venerable tech giant Microsoft Corp. and OpenAI, the creator of ChatGPT. A similar project five years ago produced the antitrust lawsuit against Google and other cases involving big tech power players such as Apple and Amazon.

Regulators in Europe are probably wondering what took so long for AI scrutiny to ramp up in the U.S. European Union policymakers in December approved the AI Act, a groundbreaking law intended to govern the use of AI technologies. The EU regulations primarily aim to monitor AI applications that could do the most damage, such as those posing risks to infrastructure and security. Additionally, developers of AI systems would be subject to new transparency requirements, and providers of technologies used to create so-called deepfake images and videos would be required to label AI-generated outputs.

Despite the efforts of the Biden administration to police AI more vigorously, lawmakers in the U.S. seem far more reluctant than their European counterparts to intervene in the market. (Keep in mind that the first draft of the AI Act circulated in 2021.) In May, a bipartisan group of senators, including Senate Majority Leader Chuck Schumer of New York, proposed a $32 billion spending plan for AI research and development. Notably absent: any specific details about regulating the sector.

It shouldn’t come as a surprise that legislators would hesitate to put a leash on what some are projecting to be a trillion-dollar business. However, if they continue putting off serious efforts to regulate AI, who knows what the sector will look like when they finally decide to act.
