The Cultural Impact of AI Generated Content: Part 1
What happens when AI generated media becomes ubiquitous in our lives? How does this relate to what we’ve experienced before, and how does it change us?
This is the first part of a two-part series I’m writing analyzing how people and communities are affected by the expansion of AI generated content. I’ve already talked at some length about the environmental, economic, and labor issues involved, as well as discrimination and social bias. But this time I want to dig in a little and focus on some of the psychological and social impacts of the AI generated media and content we consume, specifically on our relationship to critical thinking, learning, and conceptualizing knowledge.
History
Hoaxes have been perpetrated using photography essentially since its invention. The moment we had a form of media believed to show us the true, unmediated reality of phenomena and events was the moment people started devising ways to manipulate that form of media, to great artistic and philosophical effect (as well as humorous or simply fraudulent effect). Despite this, we retain a degree of unwarranted trust in photographs, and we have developed a relationship with the medium that balances trust and skepticism.
When I was a child, the internet was not yet broadly available to the general public, and very few homes had access to it, but by the time I was a teenager that had completely changed, and everyone I knew spent time on AOL Instant Messenger. Around the time I left graduate school, the iPhone launched and the smartphone era began. I recount all this to make the point that cultural creation and consumption changed startlingly quickly, and beyond recognition, in just a couple of decades.
I think the current moment represents a whole new era in the media and cultural content we consume and create, because of the launch of generative AI. It’s a little like when Photoshop became broadly available: we started to realize that photos were sometimes retouched, and we began to question whether we could trust what images showed. (Readers may find the ongoing conversation around “what is a photograph” an interesting extension of this issue.) But even then, Photoshop was expensive and required real skill to use effectively, so most photos we encountered were relatively true to life, and people generally expected that images in advertising and film were not going to be “real.” Our expectations and intuitions had to adjust to the changes in technology, and we more or less did.
Current Day
Today, AI content generators have democratized the ability to artificially produce or alter any kind of content, including images. Unfortunately, it’s extremely difficult to get an estimate of how much of the content online may be AI-generated. If you google this question, you’ll find references to a Europol article supposedly claiming the number will be 90% by 2026 — but read it and you’ll see the underlying research paper says nothing of the sort. You might also find a paper by AWS researchers cited, putting the number at 57% — but that’s also a misreading (they’re talking about text content being machine translated, not text generated from whole cloth, to say nothing of images or video). As far as I can tell, there’s no reliable, scientifically grounded work indicating how much of the content we consume may be AI generated — and even if there were, it would be outdated the moment it was published.
But if you think about it, this is perfectly sensible. A huge part of the reason AI generated content keeps proliferating is that it’s harder than ever before in human history to tell whether a human being actually created what you are looking at, and whether that representation reflects reality. How do you count something, or even estimate a count, when it’s explicitly unclear how to identify it in the first place?
I think we all have the lived experience of spotting content with questionable provenance. We see images that sit in the uncanny valley, or notice that a product review on a retail site sounds unnaturally positive and generic, and think, that must have been created with generative AI and a bot. Ladies, have you tried to find inspiration pictures for a haircut online recently? In my own experience, 50%+ of the pictures on Pinterest and similar sites are clearly AI generated, with tell-tale signs: textureless skin, rubbery features, straps and necklaces disappearing into nowhere, images carefully framed to exclude hands, never showing both ears straight on, and so forth. Those are easy to dismiss, but a large swath makes you question whether you’re seeing heavily filtered real images or wholly AI generated content. I make it my business to understand these things, and I’m often not sure myself. I hear tell that single men on dating apps are so swamped with scamming bots built on generative AI that there’s a name for the check — the “Potato Test.” If you ask a bot to say “potato,” it will ignore the request, but a real human person will likely play along. The small, everyday corners of our lives are being infiltrated by AI content without anything like our consent or approval.
Why?
What’s the point of dumping AI slop into all these online spaces? In the best case, the goal may be to get folks to click through to sites where advertising lives, offering nonsense text and images just convincing enough to earn those precious ad impressions and a few cents from the advertiser. Artificial reviews and images for online products are generated by the truckload, so that drop-shippers and vendors of cheap junk can fool customers into buying something just a little cheaper than the competition, in the hope that they’re getting a legitimate item. Perhaps the item is so incredibly cheap that the disappointed buyer will simply accept the loss rather than go to the trouble of getting their money back.
Worse, bots using LLMs to generate text and images can lure people into scams, and because the only real resource required is compute, scaling such scams costs pennies — well worth the expense if you can steal even one person’s money every so often. AI generated content is used for criminal abuse, including pig butchering scams, AI-generated CSAM, and non-consensual intimate images, which can turn into blackmail schemes as well.
There are also political motivations for AI-generated images, video, and text. In this US election year, entities all across the world, with different angles and objectives, produced AI-generated images and videos to support their viewpoints and spewed propagandistic messages via generative AI bots onto social media, especially the former Twitter, where content moderation to prevent abuse, harassment, and bigotry has largely ceased. Those disseminating this material expect that uninformed internet users will absorb its message through continual, repetitive exposure, and that for every item recognized as artificial, an unknown number will be accepted as legitimate. This material also creates an information ecosystem where truth is impossible to define or prove, neutralizing good actors and their attempts to cut through the noise.
A small minority of the AI-generated content online consists of genuine attempts to create appealing images purely for enjoyment, or relatively harmless boilerplate text generated to fill out corporate websites. But as we are all well aware, the internet is rife with scams and get-rich-quick schemers, and the advances of generative AI have ushered in a whole new era for those sectors. (And these applications carry massive negative implications for real creators, for energy and the environment, and beyond.)
Where We Are
I’m painting a pretty grim picture of our online ecosystems, I realize. Unfortunately, I think it’s accurate and only getting worse. I’m not arguing that there are no good uses of generative AI, but I’m becoming more and more convinced that the downsides for our society are going to have a larger, more direct, and more harmful impact than the positives.
I think about it this way: we’ve reached a point where it’s unclear whether we can trust what we see or read, and we routinely can’t know whether the entities we encounter online are human or AI. What does this do to our reactions to what we encounter? It would be silly to expect our ways of thinking not to change as a result of these experiences, and I worry very much that the change we’re undergoing is not for the better.
The ambiguity is a big part of the challenge, however. It’s not that we know we’re consuming untrustworthy information; it’s that the trustworthiness is essentially unknowable. We can never be sure. Critical thinking and critical media consumption habits help, but the expansion of AI generated content may be outstripping our critical capabilities, at least in some cases. This seems to me to have real implications for our concepts of trust and confidence in information.
In my next article, I’ll discuss in detail what kind of effects this may have on our thoughts and ideas about the world around us, and consider what, if anything, our communities might do about it.
Read more of my work at www.stephaniekirmer.com.
Also, regular readers will know I publish on a two-week schedule, but I am moving to a monthly publishing cadence going forward. Thank you for reading, and I look forward to continuing to share my ideas!
Further Reading
https://www.theverge.com/2024/2/2/24059955/samsung-no-such-thing-as-real-photo-ai
https://arxiv.org/pdf/2401.05749 — Brian Thompson, Mehak Preet Dhaliwal, Peter Frisch, Tobias Domhan, and Marcello Federico of AWS
https://www.404media.co/ai-generated-child-sexual-abuse-material-is-not-a-victimless-crime/
https://www.404media.co/fbi-arrests-man-for-generating-ai-child-sexual-abuse-imagery/
“Instagram Advertises Nonconsensual AI Nude Apps” (404 Media)
https://www.brennancenter.org/our-work/research-reports/generative-ai-political-advertising