The news: The Interactive Advertising Bureau (IAB) released its first AI Transparency and Disclosure Framework. It aims to curb deceptive uses of genAI in advertising and protect consumer trust as synthetic voices, images, and digital doubles make ads easier to fake.
Why it matters: The urgency for AI disclosure in ads is rising because media buyers are already leaning into generative AI for campaign creative.
The challenge: The IAB’s framework, which is voluntary, only works when agencies choose to adopt it. Without platform mandates, audits, or penalties, AI-use disclosures can become inconsistent—applied by cautious brands but ignored by everyone else.
Even if disclosure doesn’t hurt purchase intent, research from the University of Gothenburg in Sweden finds “Made with AI” labels can reduce emotional engagement and perceived authenticity, adding friction and weakening response.
Implications for advertisers: Adopting voluntary AI disclosure now makes it safer to scale AI creative across channels, so synthetic assets don't become a trust problem later.
IAB’s framework is a critical first step, but voluntary standards don’t become the norm until agencies operationalize them. Until then, AI disclosure will be uneven—careful brands will comply, while others free-ride on ambiguity.