
The creepy deepfake 'Biden' robocalls are a harbinger of high-tech voter suppression

Deepfake robocalls, a proposed law to protect children used as social media influencers and Justice Elena Kagan's AI concerns — these are the top tech stories of the week.

My friends, happy Tuesday. Here's your Tuesday Tech Drop, the top stories at the intersection of tech and politics from the past week. Check it out!

Biden robocalls foretell a frightening future

New Hampshire’s attorney general sounded the alarm Monday about robocalls sent to some residents that imitate President Joe Biden's voice and discourage people from voting in the state's Democratic primary. The source of the robocalls is uncertain — though I’ll note here that dubious robocalls have been used by partisans in the past to suppress votes.

The Biden robocalls underscore the urgent need for federal and local officials to develop measures to curb malicious uses of artificial intelligence before the problem becomes too widespread to solve.

Read more at NBC News.

FEC taking its sweet time

The nonprofit consumer advocacy group Public Citizen, which has pushed federal regulators to take action to curb the use of artificial intelligence in political ads, says the Federal Election Commission is “slow-walking the issue.” The criticism follows a statement from Republican FEC Chairman Sean Cooksey that the commission will consider whether to establish rules around deepfakes in political ads by “early summer.” Cooksey told The Washington Post that “any suggestion that the FEC is not working on the pending AI rulemaking petition is false.”

Read more at The Washington Post.

Meta accusations

A newly unredacted court filing in a lawsuit brought against Meta by the state of New Mexico alleges the company, which owns Instagram and Facebook, estimated that in 2021 as many as 100,000 children a day experienced sexual harassment on its platforms. The company told CNBC that it has addressed problems identified in the complaint, which it says “mischaracterizes our work using selective quotes and cherry-picked documents.”

Read more at CNBC.

Justice Kagan's AI issues

Supreme Court Justice Elena Kagan voiced concern about the implications for artificial intelligence should the Supreme Court overturn Chevron deference, the long-standing legal principle that empowers federal agencies to interpret laws seen as unclear. Kagan argued last week that overturning the doctrine would take regulatory power over issues like artificial intelligence out of the hands of experts and put it in the hands of federal judges.

“Does the Congress want this court to decide those questions — policy-laden questions — of artificial intelligence?” she asked. 

I’m going to guess “no.” But that may not stop this court.

Read more at Bloomberg Law.

The pope’s AI guy

Its soaring architecture, Renaissance art and status as the seat of a centuries-old religion may give you the impression that the Vatican is entirely old-school. But did you know the Vatican has a point person leading its artificial intelligence policy? The Associated Press has an interesting writeup on Friar Paolo Benanti, whose job at the Vatican is to understand “how to govern artificial intelligence so that it enriches — and doesn’t exploit — people’s lives.”

Read more at The Associated Press.

New ‘kidfluencer’ bill

A new bill proposed in Ohio seeks to guarantee “kidfluencers” — that is, children who appear in viral social media videos posted by family members — aren’t financially exploited and that they get a share of any money made. The Ohio Capital Journal reports that “H.B. 376 would require adult vloggers who feature minors in their content to set aside a percentage of the money made per year.”

Read more at the Ohio Capital Journal.

Facial recognition faceoff

A group of Democratic senators sent a letter to the Justice Department seeking answers about whether the department’s use of facial recognition technology violates people’s civil rights. 

“We are deeply concerned that facial recognition technology may reinforce racial bias in our criminal justice system and contribute to arrests based on faulty evidence,” the senators wrote. They want to know whether the Justice Department’s facial recognition tools abide by civil rights standards, whether the department has policies to curb misuse of those tools and whether it has studied the disparate impacts of facial recognition technology on marginalized groups.

Read more at The Hill.

Phillips goes robotic

OpenAI, the artificial intelligence company behind ChatGPT, suspended the developer account of AI firm Delphi from its platform after Delphi created a chatbot modeled after a long-shot Democratic presidential candidate, Rep. Dean Phillips. The bot was backed by two Silicon Valley tech bros who are funding a Phillips-aligned super PAC.

Read more at The Guardian.