Below you will find pages that use the taxonomy term “ai”
September 20, 2024
Viewing Generations Through Generative AI
I contributed commentary to a research project conducted by AIPort and TuringPost that submitted identical prompts to four different AI image generation tools and compared the results.
Click through to see the whole project, including many of the images.
September 16, 2024
Disability, Accessibility, and AI
A discussion of how AI can help and harm people with disabilities
I recently read a September 4th thread on Bluesky by Dr. Johnathan Flowers of American University about the dustup that occurred when NaNoWriMo organizers put out a statement approving the use of generative AI, such as LLM chatbots, in this year’s event.
“Like, art is often the ONE PLACE where misfitting between the disabled bodymind and the world can be overcome without relying on ablebodied generosity or engaging in forced intimacy.
August 1, 2024
Economics of Generative AI
What’s the business model for generative AI, given what we know today about the technology and the market?
OpenAI has built one of the fastest-growing businesses in history. It may also be one of the costliest to run.
The ChatGPT maker could lose as much as $5 billion this year, according to an analysis by The Information, based on previously undisclosed internal financial data and people involved in the business.
July 4, 2024
Data Privacy in AI: PII versus Personal Information
What kind of information does data privacy law actually cover?
In my continuing series of columns digging deeper into the content of my recent talk at the AI Quality Conference, today I’m going to talk about how we distinguish the kinds of data that are and are not covered by the data privacy laws springing up across the US and globally. Different kinds of data receive different levels of protection depending on the jurisdiction, so this is important to know if you are using data about individuals for analysis or machine learning.
March 14, 2024
Uncovering the EU AI Act
The EU has moved to regulate machine learning. What does this new law mean for data scientists?
The EU AI Act just passed the European Parliament. You might think, “I’m not in the EU, whatever,” but trust me, this is actually more important to data scientists and individuals around the world than you might expect. The EU AI Act is a major move to regulate and manage the use of certain machine learning models that are used in the EU or affect EU citizens, and it contains some strict rules and serious penalties for violations.
March 2, 2024
Seeing Our Reflection in LLMs
When LLMs give us outputs that reveal flaws in human society, can we choose to listen to what they tell us?
Machine Learning, Nudged
By now, I’m sure most of you have heard the news about Google’s new LLM*, Gemini, generating pictures of racially diverse people in Nazi uniforms. This little news blip reminded me of something I’ve been meaning to discuss: what happens when models have blind spots and we apply expert rules to their predictions to avoid returning something wildly outlandish to the user.
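The “expert rules on top of model predictions” pattern mentioned here is essentially a post-processing guardrail. A minimal sketch in Python, using a hypothetical demand-forecasting example (the function names and thresholds are illustrative assumptions, not from the post):

```python
# Minimal sketch of layering expert rules over model predictions.
# The demand-forecast framing and the rule thresholds are hypothetical
# illustrations, not taken from the original post.

def apply_expert_rules(prediction: float, max_plausible: float = 10_000.0) -> float:
    """Correct outputs that a domain expert would immediately reject."""
    if prediction < 0:
        return 0.0  # negative demand is impossible, so clip to zero
    if prediction > max_plausible:
        return max_plausible  # cap wildly outlandish values before the user sees them
    return prediction

def guarded_predict(model_predict, features) -> float:
    """Wrap a model call so every prediction passes through the rule layer."""
    return apply_expert_rules(model_predict(features))
```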
October 3, 2023
Is Generative AI Taking Over the World?
Businesses are jumping on a bandwagon of creating something, anything, that they can launch as a “Generative AI” feature or product. What’s driving this, and why is it a problem?
The AI Hype Cycle: In a Race to Somewhere?
I was recently catching up on back issues of Money Stuff, Matt Levine’s indispensable newsletter/blog at Bloomberg, and there was an interesting piece about how AI stock picking algorithms don’t actually favor AI stocks (and also they don’t perform all that well on the picks they do make).
September 17, 2023
What Does It Mean When Machine Learning Makes a Mistake?
Do our definitions of “mistake” make sense when it comes to ML/AI? If not, why not?
A comment on my recent post about the public perception of machine learning got me thinking about the meaning of error in machine learning. The reader asked if I thought machine learning models would always “make mistakes”. As I described in that post, people have a strong tendency to anthropomorphize machine learning models. When we interact with an LLM chatbot, we apply techniques to those engagements that we have learned by communicating with other people—persuasion, phrasing, argument, etc.
July 25, 2023
Thinking Sociologically About Machine Learning
I sometimes mention in my written work and speeches that I have a sociology background, and used to be an adjunct professor of sociology at DePaul University before embarking on my data science career. I loved sociology, and still do — it shaped so much about how I understand the world and my own place in it.
However, when I made a career change and turned to data science, I spent a lot of time explaining how that background, training, and experience were assets to my practice of data science, because it wasn’t obvious to people.
Speaking
Beautiful Bastards Podcast
Topic: Data Science/Machine Learning misc
Location: Virtual
Date: October 4, 2021
Links:
https://www.beautifulbastardspodcast.com/the-rise-of-ai-stephanie-kirmer-on-the-future-of-big-tech/
Speaking
RE*WORK's Virtual Applied AI Summit - Spring 2020
Topic: Moderator, Panel: Strategies for Effectively Building, Deploying, and Monitoring AI
Location: Virtual/London, UK
Date: May 21, 2020
Links: https://www.re-work.co/events/applied-ai-virtual-summit