March 2, 2024
Seeing Our Reflection in LLMs
When LLMs give us outputs that reveal flaws in human society, can we choose to listen to what they tell us?
Machine Learning, Nudged
By now, I’m sure most of you have heard the news about Google’s new LLM*, Gemini, generating pictures of racially diverse people in Nazi uniforms. This little news blip reminded me of something I’ve been meaning to discuss: when models have blind spots, we sometimes apply expert rules to the predictions they generate to avoid returning something wildly outlandish to the user.
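The idea of layering expert rules over a model's raw predictions can be sketched in a few lines. Everything below is illustrative, not Gemini's actual guardrail logic: the rule list, function name, and fallback message are all hypothetical placeholders.

```python
# Hypothetical sketch of a rule layer that screens model predictions
# before they reach the user. The blocked terms and fallback message
# are placeholders, not any real system's rules.

BLOCKED_TERMS = {"blocked-term-a", "blocked-term-b"}

def apply_expert_rules(prediction: str,
                       fallback: str = "[response withheld]") -> str:
    """Return the model's prediction unless it trips a hand-written rule."""
    lowered = prediction.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        # A rule fired: override the model rather than return its output.
        return fallback
    return prediction

print(apply_expert_rules("a perfectly ordinary answer"))
print(apply_expert_rules("this output contains blocked-term-a"))
```

The catch, of course, is that the rules only cover the blind spots their authors anticipated, which is exactly the failure mode the Gemini episode illustrates.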