September 16, 2024
Disability, Accessibility, and AI
A discussion of how AI can help and harm people with disabilities
I recently read a September 4th thread on Bluesky by Dr. Johnathan Flowers of American University about the dustup that occurred when organizers of NaNoWriMo put out a statement saying that they approved of people using generative AI such as LLM chatbots as part of this year’s event.
“Like, art is often the ONE PLACE where misfitting between the disabled bodymind and the world can be overcome without relying on ablebodied generosity or engaging in forced intimacy.
August 1, 2024
Economics of Generative AI
What’s the business model for generative AI, given what we know today about the technology and the market?
OpenAI has built one of the fastest-growing businesses in history. It may also be one of the costliest to run.
The ChatGPT maker could lose as much as $5 billion this year, according to an analysis by The Information, based on previously undisclosed internal financial data and people involved in the business.
June 4, 2024
The Meaning of Explainability for AI
Do we still care about how our machine learning does what it does?
Today I want to get a bit philosophical and talk about how explainability and risk intersect in machine learning.
What do we mean by Explainability?
In short, explainability in machine learning is the idea that you could explain to a human user (not necessarily a technically savvy one) how a model is making its decisions. A decision tree is an example of an easily explainable (sometimes called “white box”) model: you can point to a rule like “the model splits the data between houses with more than one acre of land and those with one acre or less,” and so on.
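To make that concrete, here is a minimal sketch of how a decision tree’s reasoning can be printed in plain language with scikit-learn. The toy housing dataset and its single “acreage” feature are invented for illustration:

```python
# A minimal sketch of inspecting a "white box" model, assuming a toy
# dataset with a single hypothetical "acreage" feature.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: acreage of each house, and a made-up binary label
# (say, whether the house sold above asking price).
X = [[0.25], [0.5], [0.75], [1.0], [1.5], [2.0], [3.0], [5.0]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# export_text renders the learned splits as readable rules, which is
# exactly the kind of explanation a non-technical user can follow.
print(export_text(model, feature_names=["acreage"]))
# Prints something like:
# |--- acreage <= 1.25
# |   |--- class: 0
# |--- acreage >  1.25
# |   |--- class: 1
```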
May 2, 2024
Environmental Implications of the AI Boom
The digital world can’t exist without the natural resources to run it. What are the costs of the tech we’re using to build and run AI?
There’s a core concept in machine learning that I often tell laypeople about to help clarify the philosophy behind what I do. That concept is the idea that the world changes around every machine learning model, often because of the model, so the world the model is trying to emulate and predict is always in the past, never the present or the future.
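Here is a minimal sketch of that idea with synthetic data (the distributions and the drifted decision boundary are invented for illustration): a classifier fit to yesterday’s world scores noticeably worse once the world shifts.

```python
# A minimal sketch of the "world moves on" problem: a model fit to the
# past scores worse on the present once the underlying rule drifts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Past" world the model was trained on: class boundary at x = 0.
X_past = rng.normal(0, 1, size=(1000, 1))
y_past = (X_past[:, 0] > 0).astype(int)

# "Present" world: the boundary has drifted to x = 0.5.
X_now = rng.normal(0, 1, size=(1000, 1))
y_now = (X_now[:, 0] > 0.5).astype(int)

model = LogisticRegression().fit(X_past, y_past)
print("accuracy on the past:   ", model.score(X_past, y_past))
print("accuracy on the present:", model.score(X_now, y_now))
# The second number is lower: the model is emulating a world
# that no longer exists.
```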
April 1, 2024
The Coming Copyright Reckoning for Generative AI
Courts are preparing to decide whether generative AI violates copyright—let’s talk about what that really means
Copyright law in America is a complicated thing. Those of us who are not lawyers understandably find it difficult to suss out what it really means, and what it does and doesn’t protect. Data scientists don’t spend a lot of time thinking about copyright, unless we’re choosing a license for our open source projects.
March 14, 2024
Uncovering the EU AI Act
The EU has moved to regulate machine learning. What does this new law mean for data scientists?
The EU AI Act just passed the European Parliament. You might think, “I’m not in the EU, whatever,” but trust me, this is actually more important to data scientists and individuals around the world than you might expect. The EU AI Act is a major move to regulate and manage the use of certain machine learning models in the EU or that affect EU citizens, and it contains some strict rules and serious penalties for violation.
March 2, 2024
Seeing Our Reflection in LLMs
When LLMs give us outputs that reveal flaws in human society, can we choose to listen to what they tell us?
Machine Learning, Nudged
By now, I’m sure most of you have heard the news about Google’s new LLM*, Gemini, generating pictures of racially diverse people in Nazi uniforms. This little news blip reminded me of something I’ve been meaning to discuss: what happens when models have blind spots, so we apply expert rules to the predictions they generate to avoid returning something wildly outlandish to the user.
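As a minimal sketch of what that kind of nudging looks like in practice (the rules and numbers here are hypothetical, not any production system’s):

```python
# A minimal sketch of layering hand-written expert rules over raw model
# predictions, assuming a hypothetical house-price regressor.
def apply_expert_rules(raw_prediction: float, acreage: float) -> float:
    """Nudge a model's output with hand-written guardrails."""
    # Rule 1: a price can never be negative, whatever the model says.
    prediction = max(raw_prediction, 0.0)
    # Rule 2: cap implausible per-acre prices the model was never
    # trained on (a known blind spot in this toy scenario).
    max_price_per_acre = 5_000_000
    prediction = min(prediction, acreage * max_price_per_acre)
    return prediction

# The user only ever sees the nudged value, not the raw one.
print(apply_expert_rules(raw_prediction=-120_000.0, acreage=0.5))  # 0.0
```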
November 30, 2023
What Role Should AI Play in Healthcare?
On the use of machine learning in healthcare and the United Healthcare AI scandal
Some of you may know that I am a sociologist by training — to be exact, I studied medical sociology in graduate school. This means I focused on how people and groups interact with illness, medicine, healthcare institutions, and concepts and ideas around health.*
I taught undergraduates going into healthcare fields about these issues while I was an adjunct professor, and I think it’s really important for people who become our healthcare providers to have insight into the ways our social, economic, and racial statuses interact with our health.
November 15, 2023
Detecting Generative AI Content
On deepfakes, authenticity, and the President’s Executive Order on AI
One of the many interesting ethical issues that come with the advances of generative AI is detecting the products of these models. It’s a practical issue as well, for those of us who consume media. Is this thing I am reading or looking at the product of a person’s thoughtful work, or just words or images probabilistically generated to appeal to me?
October 31, 2023
How Human Labor Enables Machine Learning
Much of the division between technology and human activity is artificial — how do people make our work possible?
We don’t talk enough about how much manual, human work we rely upon to make the exciting advances in ML possible. The truth is, the division between technology and human activity is artificial. All the inputs that go into models are the result of human effort, and all the outputs exist, in one way or another, to have an impact on people.
October 3, 2023
Is Generative AI Taking Over the World?
Businesses are jumping on a bandwagon of creating something, anything that they can launch as a “Generative AI” feature or product. What’s driving this, and why is it a problem?
The AI Hype Cycle: In a Race to Somewhere?
I was recently catching up on back issues of Money Stuff, Matt Levine’s indispensable newsletter/blog at Bloomberg, and there was an interesting piece about how AI stock-picking algorithms don’t actually favor AI stocks (and they don’t perform all that well on the picks they do make).
September 17, 2023
What Does It Mean When Machine Learning Makes a Mistake?
Do our definitions of “mistake” make sense when it comes to ML/AI? If not, why not?
A comment on my recent post about the public perception of machine learning got me thinking about the meaning of error in machine learning. The reader asked whether I thought machine learning models would always “make mistakes”. As I described in that post, people have a strong tendency to anthropomorphize machine learning models. When we interact with an LLM chatbot, we apply techniques we have learned from communicating with other people: persuasion, phrasing, argument, and so on.
August 23, 2023
Archetypes of the Data Scientist Role
Data science roles can be very different, and job postings are not always clear. What hat do you want to wear?
After the positive responses to my recent post in Towards Data Science about Machine Learning Engineers, I thought I would write a bit about what I think are the real categories of roles for data science practitioners in the job market. While I was previously talking about the candidates, e.