The news: A CNN investigation found several fraudulent Spotify podcasts using AI-generated voices to market illegal online drugstores selling prescription medications without medical authorization, breaking US law.
The bigger picture: The podcasts raise questions about current content moderation capabilities, as AI is advancing faster than platform oversight mechanisms can evolve. AI makes it easier to mass-produce harmful content across platforms that aren't yet equipped to detect it quickly, and this risk extends beyond the platforms hosting the harmful material.
Brands advertising across these platforms face reputational risk when their ads appear alongside or within harmful AI-generated content. An Adalytics report contended that current brand safety tools are insufficient to protect brands, claiming that ads for major brands often appear next to inappropriate content because those tools were brought to market before being fully developed and verified.
Our take: As AI matures, platform accountability will increasingly separate leaders from the rest. Advertisers may begin favoring platforms with clearer transparency, real-time moderation insights, and rapid response mechanisms for AI-related incidents. And as risks rise, brands could pivot from scale-focused programmatic buys to curated environments and premium inventory where content is more tightly controlled.
As AI makes moderation and vetting exponentially harder, advertisers will demand transparency and safeguards from platforms, and will insist on understanding their moderation processes, before risking brand exposure with their ad spend.