
Feds say AI can be used to discriminate, and they're 100% right

It sounds as though the government is finally recognizing just how unwieldy and dystopian the AI industry could become.


A group of federal agencies issued a joint statement Tuesday assuring Americans they are prepared to regulate the fast-moving artificial intelligence industry. 

I'm not breathing a sigh of relief just yet, but the government's attention to this matter is, at least, encouraging.

With the outsize power of algorithm-driven social media platforms and the rapid development of AI computer programs known as chatbots, many observers, me included, have been urging the federal government to play a larger role in restraining the AI industry before it gets too unwieldy and, frankly, dystopian. 

Signed by leaders from the Justice Department, the Consumer Financial Protection Bureau, the Equal Employment Opportunity Commission and the Federal Trade Commission, Tuesday’s statement is essentially meant to inform the public that federal agencies are mindful of the improvements AI technology can enable and equally aware of the harm this technology can cause. 

“We take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws," the leaders wrote, "and each of our agencies has previously expressed concern about potentially harmful uses of automated systems.”

[Photo illustration: a blurry, pixelated person with digitized boxes selecting features of her face. MSNBC; Getty Images]

It’s a timely statement given the abundance of recent news stories about unsettling uses of AI: IRS algorithms that appear to disproportionately target Black taxpayers for audits, algorithm-reliant housing companies facilitating housing discrimination, and AI-powered facial recognition and biometric surveillance tools that disproportionately target Black and brown people.

The topline version of the agencies’ letter? “We’re on this, y’all.” 

The FTC, for example, reminded companies it may be illegal for them “to use automated tools that have discriminatory impacts, to make claims about AI that are not substantiated, or to deploy AI before taking steps to assess and mitigate risks.”

The EEOC reminded people that the “Americans with Disabilities Act applies to the use of software, algorithms, and AI to make employment-related decisions about job applicants and employees.”

That’s noteworthy given the abundance of stories about algorithm-based hiring discrimination and the Justice Department’s guidance last year about the potential for algorithm-based application processes to discriminate against people with disabilities. 

The CFPB, which seeks to protect consumers from predatory financial institutions, said federal laws around lending and credit remain intact with regard to AI tools, adding that “the fact that the technology used to make a credit decision is too complex, opaque, or new is not a defense for violating these laws.”

And the Justice Department’s Civil Rights Division noted its purview includes law enforcement in several arenas that stand to be influenced by AI, “including in education, the criminal justice system, employment, housing, lending, and voting.”

The AI industry is undoubtedly running wild. This letter is an indication of the government’s efforts to rein it in.