Biden White House takes baby step toward regulating AI — finally

Federal officials requested public comment on the potential harms of artificial intelligence, signaling concerns over the advanced technology.

It’s been a slow build, but we finally appear to be reaching a phase in which the federal government is taking the potential dangers of artificial intelligence seriously. 

This has been a topic of focus on The ReidOut Blog for nearly two years now, stemming from my concerns about the potential for AI technology to worsen inequality and to be misused by nefarious actors.

Now, federal officials are asking for public comment on the potential impacts of AI, as algorithm-based social media platforms continue to amass power and other companies rapidly develop superfast AI conversation simulators, known as chatbots, that offer humanlike (and frequently incorrect) responses to user queries.

The Department of Commerce's National Telecommunications and Information Administration said Tuesday in a press release about its public comment request: “Just as food and cars are not released into the market without proper assurance of safety, so too AI systems should provide assurance to the public, government, and businesses that they are fit for purpose.” 

A notice dated April 7 from the NTIA laid out the methodology for accepting public comments, along with the rationale for collecting them. It stated:

This request focuses on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders — that is, to provide assurance — that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. NTIA will rely on these comments, along with other public engagements on this topic, to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.

Basically, the federal government is saying, “We gotta make sure this stuff isn’t completely destructive to society.”

Here's what the NTIA wants to know: 

  • What kinds of trust and safety testing should AI development companies and their enterprise clients conduct?
  • What kinds of data access are necessary to conduct audits and assessments?
  • How can regulators and other actors incentivize and support credible assurance of AI systems along with other forms of accountability?
  • What different approaches might be needed in different industry sectors — like employment or health care?

It’s good to see the federal government wading into the AI discussion more deliberately. In the past, I’ve shouted out Democratic lawmakers like Rep. Ted Lieu of California and Sen. Michael Bennet of Colorado for leading the charge on the need for AI regulation.

Supreme Court Justice Neil Gorsuch entered the AI debate earlier this year with some thought-provoking questions about the technology's potentially harmful uses.

Let’s hope the NTIA's public comment period marks the start of a broader effort to rein in some of these AI companies. With algorithms already shown to disproportionately flag Black taxpayers for IRS audits and to misidentify people with darker skin tones, this technology is already wreaking real-world havoc.