The best advice I can give you for 2023 is to familiarize yourself with the concept of “artificial intelligence” and its impact on our everyday lives.
Today, so-called machine learning and AI factor prominently in daily activities. And in the wrong hands, this technology can wreak havoc on society.
To me, social media platforms have been the clearest example of this. Twitter and Facebook feeds, powered by artificial intelligence and controlled by some of the world’s wealthiest people, have bombarded users with politically opportunistic conspiracy theories, misinformation, and hatred.
These days, I tend to see the misery these platforms cause as an opportunity to highlight the potential dangers of artificial intelligence.
But the trouble with AI isn’t limited to social media. Law enforcement agencies use artificial intelligence to “predict” (read: assume) where crimes will occur and who will commit them. Employers use AI to screen job candidates and decide who will best fit their workplace. And medical professionals use AI to help make diagnoses and prescribe treatments.
But because this AI is built by humans who carry biases, its output can reflect those biases as well, meaning the pitfalls of AI creations often fall heaviest on marginalized people. And the appetite for AI only seems to be growing, with one expert warning of a premature and “shocking” rollout of AI technology in 2023.
This concern about hastiness with AI underpins a 2018 paper by AI researchers Joy Buolamwini and Timnit Gebru. The paper, titled “Gender Shades,” shows how artificial intelligence such as facial recognition software often fails to correctly identify Black people, and Black women in particular. In a world increasingly reliant on AI, machine failures like the ones Buolamwini and Gebru describe will have downstream impacts on who is (or isn’t) hired and who is (or isn’t) arrested.
Personally, I’ve looked to Black and brown techies for guidance on this. Routinely, they’re the ones most attuned to the shortcomings of AI, and the most invested in fixing them. Buolamwini and Gebru have both founded organizations focused on ethical AI. Follow them! (Here’s a link to Buolamwini’s Algorithmic Justice League, and one for Gebru’s Distributed AI Research Institute.)
ReidOut Blog readers may recognize Gebru, who was ousted from Google after raising concerns about AI bias, from my previous citations of her work. I love how she thinks and talks about tech and artificial intelligence. I share her excitement about AI’s possibilities, and her fears about its misuse.
Earlier this year, I stumbled upon this lecture of hers, hosted by Harvard’s Radcliffe Institute. It’s titled “The Quest for Ethical Artificial Intelligence,” and I highly recommend it. The quest for ethical AI is a worthy one. An essential one, even.
Choose your leaders wisely.
Check out the lecture below.