Marcus Johnson (00:04):
Hey gang, it's Monday, April 20th. Welcome to Behind the Numbers, an eMarketer podcast. I'm Marcus. Join me for today's conversation; we have two people. Joining us from the UK, one of our principal analysts, living in the sometimes sunny Brighton. It's Señor Bill Fisher.
Bill Fisher (00:22):
Hello, Marcus. Great to be back on the show eventually.
Marcus Johnson (00:26):
Okay. Here's the thing. Bill's mad at me because he has not been on in a long time. I've been trying to have him back on, but Danny and the whole production crew were like, "We don't like him." So it's really their fault if you think about it. I feel like Corina also has something to do with it.
Bill Fisher (00:43):
I'll speak to her.
Marcus Johnson (00:43):
Okay. My fault. Fair enough. I'm also joined by one of our AI and tech experts living in California, Jacob Bourne.
Jacob Bourne (00:51):
Hello, Marcus. Glad to be here.
Marcus Johnson (00:52):
Hey, fella. Jacob also is upset with me for some reason. I can just hear it in his tone. Everyone's mad at me today. I don't know why.
Jacob Bourne (01:00):
I think it's projection, Marcus. Projection, probably.
Marcus Johnson (01:05):
It's going to be a fun episode apparently. We start with Back To The Day.
(01:14):
Okay. So the politest country in the world is... A guess from each.
Bill Fisher (01:19):
This is right. No?
Marcus Johnson (01:22):
Hmm?
Bill Fisher (01:22):
It says no.
Marcus Johnson (01:24):
It should be the UK.
(01:26):
This is why I brought this to you, Bill, because I'm equally as disappointed as you will soon be. My answer also is the UK, which it should be. It's not. Jacob?
Jacob Bourne (01:35):
Yeah. Mine first went to the UK, but I don't think it is. I don't think it is. It probably is somewhere, I don't know, somewhere that one wouldn't think like Nepal. I don't know.
Marcus Johnson (01:52):
It is somewhere you don't think of immediately, but when you say it, you're like, of course they're first. Japan. According to Remitly, Japan is considered the most polite country in the world. It received over 35% of votes cast globally. So one third of all the votes went to this one place. That's nearly three times more than second-place Canada with 13%. The UK is in third with 6.2%, or, as Susie, who hosts the retail show, insists on pointing out to me, half as much as Canada.
Bill Fisher (02:28):
It makes a lot of sense. So with the World Cup, the Soccer World Cup, coming up, a little fact: the Japanese fans hang around after the match and clean up after themselves. They're that polite.
Jacob Bourne (02:40):
That is very, very polite. That's like beyond polite.
Marcus Johnson (02:45):
Of course. Well, this speaks to what Gabrielle Cohen of Visual Capitalist, who wrote this piece, explains: that certain traits associated with local culture no doubt contribute to Japanese people's reputation for politeness, including the value placed on cleanliness, to Bill's point, and punctuality. She's saying perceptions of politeness can shape everything from tourism experiences to international business relationships. For travelers, these rankings often influence expectations around etiquette, hospitality, and day-to-day interactions abroad. I'm still so upset. The US is 13th. That's tough. 1.6%.
Jacob Bourne (03:19):
13th isn't bad, actually.
Marcus Johnson (03:21):
It's not terrible, but there were another 25 countries...
Jacob Bourne (03:33):
I would have guessed lower.
Marcus Johnson (03:33):
China fourth, Germany fifth. Anyway, I'll never get over this, but I should never have shared... I don't know why I shared this with Susie. She's the worst. Anyway, today's real topic: AI's influence on brand safety.
(03:47):
All right, gents. So AI slop on YouTube is surging as consumer patience for AI social content slumps, writes our Grace Harmon. She explains that over one in five videos recommended by YouTube's algorithm are AI slop, according to a Kapwing analysis of Social Blade data. One in five. Jacob, I'll start with you on this one. What's your main takeaway from Grace's recent article on YouTube's AI slop surge and its impact on brand safety?
Jacob Bourne (04:20):
Yeah, my first reaction to that was that's a lot of slop. The second thing is, it's really the last thing that social media needed. If you think back to the beginning of the AI era, public sentiment about social media was already pretty low. And so now a couple of years in, you have this flood of AI slop. And so I think, unfortunately, it probably means people are less receptive to new content, new channels, maybe even more turned off by social media than they were. I think another concern is it probably ends up shrinking the influx of new social media users, especially if you think about some of the restrictions on minors, about using these platforms that are emerging. So I think despite AI's potential benefits for social media, I think this kind of AI slop really points to a race to the bottom mentality that I think is just a big risk with AI in general. And it seems like, unfortunately, it's inevitable on these types of platforms.
Marcus Johnson (05:24):
Yeah, 20% already is high, and you can imagine it obviously only going onwards and upwards. On your point that users might use platforms less, there's some data in the piece that points to this: about half of US adults said they would use social platforms less or stop using them altogether if the amount of AI content in their feeds grew, according to Story Radius. Again, it's just what they say; what they do is different. Bill, what jumped out to you the most here?
Bill Fisher (05:48):
Apart from the fact that the slop's probably growing faster than brands can keep on top of things. A big catchword for me this year is trust. So as AI takes over and we have all this slop or synthetic content flooding our feeds and influencing the algorithms, trust is going to be impacted. I think trust and quality of content could be really, really important in the coming year or two, because as this type of synthetic content becomes more and more pervasive in people's feeds, the more they're going to begin to distrust the brands that appear around them.
(06:36):
And you've mentioned already the stat from Story Radius about adults using the platforms less if there's an increase in this type of content coming onto the platforms. It is going to continue coming onto the platforms. As you can see on the screen, some data from Graphite, which is a little bit old in AI terms, estimated that around half of all new articles published online were created by AI. That was last year, and that was just written content. So you can imagine where we're at now. Algorithms are optimized for engagement. This slop is geared to hit all the right buttons, pull the right levers, so it appears in people's feeds. And this corresponds with polarizing, high-attention content, because that's how you get the engagement. So that's going to be content that brands don't want to be next to, right?
Jacob Bourne (07:27):
Yeah. Yeah, for sure. And I think that mistrust also, in addition to maybe making people use social media less, also would just make people want to gravitate towards brands they already trust and content they already trust versus being open to new types of digital experiences in general.
Marcus Johnson (07:46):
Yeah. Yeah, that data that Bill just cited from Graphite: half, as you mentioned, half of new English-language articles published online are AI-generated versus human-written. That was 8% three years ago, or two and a half years ago. And a year or two before that, it was about 2%. So it's gotten out of control really, really quickly.
Bill Fisher (08:11):
That's the pace at which things move in this space, right?
Marcus Johnson (08:15):
Yeah. Yeah. Bill, this data you just cited is from your research on brand safety in 2026: AI and the risk it poses, but also the help it potentially provides. And so we're going to talk a bit about those two sides of the coin, or the double-edged sword, however you want to phrase it. Let's start with the ways it's hindering first. We're going to try to put together a couple of ways it's hindering. Bill, I'll start with you: one way AI is hindering advertisers in their brand safety efforts.
Bill Fisher (08:56):
We've already covered this to a degree in the first bit, but authenticity is becoming a huge issue with this deluge of synthetic media, because you've got consumers coming to platforms, and it's a bit weird to come onto a platform and have to think to yourself, is that a real influencer? Because you're getting synthetic influencers, who aren't even real people, and you're having to think, is this real? Which is a bit of an issue. But the problem that creates for brands is, the more this kind of inauthentic content proliferates on these platforms, how do they go about traversing it and ensuring that they're showing up in the right places, not next to utter nonsense, really? And some of it is still pretty bad at the moment. It's still reasonably easy to spot some of it, but some of it, not so much.
Jacob Bourne (10:05):
Yeah.
Marcus Johnson (10:06):
I can't remember if it was you or Grace who wrote this as I was preparing for this episode, but saying it takes you out of the moment. To have to be a policeman, or just an auditor, asking of every single thing that you see, "Is this real? How much AI was used?" It really does pull you out of the experience, which can't be good.
Jacob Bourne (10:29):
Yeah, for sure.
Marcus Johnson (10:30):
Sure. Jacob, please.
Jacob Bourne (10:33):
Yeah. So I totally agree with what Bill said. I think another challenge for brand safety is that part of the reason we have this AI slop is because AI is really good at creating massive amounts of content at scale. And so you can use that in the ad creation process too. You can make a bunch of different versions of an ad that's really personalized. But part of that means it's harder to vet each of those variations. In the past, you'd have your legal review teams review every ad before it went out.
(11:06):
But now, with these potentially thousands of variations, you can't really do that. And on top of that, a lot of these AI tools for content creation really optimize for performance, like clicks and conversion rates, and that can push the AI towards exaggerated claims or a sense of urgency in messaging that might be outside of compliance and also just pose general brand safety risks. So I think AI is really powerful for content creation, but it also introduces these more operational and technical issues on the compliance end, which means that messages that aren't perfectly brand safe might end up slipping through the cracks.
Bill Fisher (11:55):
Yeah. And I think to build on what you said there, Jacob, we've spoken about how the algorithms are primed for engagement, and so they push this polarized content. And then what you get is a great deal of misinformation, certainly on the social platforms. And this is a huge issue that brands are concerned about. So as you can see from the chart from Activate Consulting, more than a third of GenAI users cite accuracy as their primary concern with AI outputs.
Jacob Bourne (12:34):
The other thing I would just add is that there's maybe a very subtle brand safety risk at play here. When you have AI slop that people are turned off by, social media users might be looking at potentially AI-generated ads alongside that slop, and they may or may not be aware of it, but it could create negative unconscious associations with that brand over time. And so I think it's a hindrance that could easily slip under the radar, because it's not always obvious what people's sentiment is around ads that they don't know are AI-generated, but it could fuel a negative association.
Bill Fisher (13:21):
Yeah, that's a really good point. And a really high profile example of that is the Coke Christmas ads, which for the second year used AI. And in testing, it tested quite well. But then as soon as it launched last year before Christmas, before holiday period, huge negative sentiment. I'm still not sure quite how that's manifesting. Is it really hurting Coke's bottom line? They've done it for two years running, so maybe it isn't.
Jacob Bourne (13:51):
Yeah. With a product and brand like Coke, it's hard to say that the impact would be immediate. But at least especially from one ad, but if they continue to run ads like that, then I think over time it potentially could erode the consumer sentiment around the brand.
Marcus Johnson (14:11):
Yeah. Probably a lot of this stuff is long-term. You see backlash, right, Bill? And so you're assuming that a lot of people are upset about it, and then it goes away, and you think to yourself, to what you just asked, has it really affected them at all? But I think a lot of this is just long-term. A lot of people have very warm, fuzzy feelings about that ad and about the Coke brand. And if that does erode over time, then you might start to see it down the road. There was a really interesting stat from Grace's piece: 51% of people think AI-generated videos are worse at emotional storytelling than human-created content. And I thought that was a really interesting note, because yes, you can do it faster. Yes, you can add all these things with AI. But in terms of one human telling a story to another, it doesn't hit the same, and it might never.
Jacob Bourne (15:00):
Yeah. And I think the Coke ad is notable because the backlash was from people who were aware that it was AI generated. But I think people aren't always aware that they're viewing AI-generated ads. And if the AI isn't as good at that emotional storytelling and connecting with consumers, then it could just be ineffective as well as being a brand safety risk. So I think that's the other thing.
Marcus Johnson (15:29):
Yeah. Bill, anything else in terms of how AI is hindering advertisers and their brand safety efforts? We've had authenticity, we've had just the amount of content that people are having to wade through to try to figure out what's what.
Bill Fisher (15:49):
There are issues around platform governance. I mean, this is a tangential issue. It's not necessarily a direct problem with AI, but it's a direct problem that the platforms are having with AI in terms of how they govern its use. And there have been some high-profile missteps in this regard. X and Grok. It was earlier this year, wasn't it, I think, when the European Commission opened an investigation into it because of the circulation of indecent images of kids. How is X governing it? Not very well, by the looks of things.
(16:39):
And then there are other examples that are maybe slightly less obvious. YouTube recently relaxed some of the rules around running ads next to sensitive content. So they have a list of content types that ads can't run next to, but they now allow ads to run next to content that contains some of these categories. I think self-harm and sexual abuse are on there, incredibly, so long as the content is dramatized or something like that. But again, it's governing it a different way than other platforms are, which means that on that platform, there's more risk associated with it.
Jacob Bourne (17:28):
Yeah, definitely. It adds another layer for brands to have to worry about in terms of, because they can't control the platform governance.
Marcus Johnson (17:39):
You had a note in your research, Bill, that some of the platforms are putting the control in consumers' hands. It was TikTok and Pinterest, I think, in terms of turning up the volume on AI or turning it down. Do you think that's going to be a longer-term strategy with some of the other major platforms, or do you think that's going to fizzle out as a strategy?
Bill Fisher (18:02):
We might come onto this a little bit later in the conversation when we're rounding up, but I don't think platforms can get away with that. I don't think they should be able to get away with that. The onus should be on them to regulate themselves, because a lot of these platforms aren't regulated properly. And when we talk about trust and quality, we know that if you buy an ad in the UK on ITV, the major commercial TV station, you know that it's a brand safe environment. You know that it's not going to show up next to sexualized content. It might cost a little bit more, but you know that it's brand safe. On some of these social platforms, you just need a login, or not even a login, and you don't know where the hell this content's going to show up.
(18:52):
It shouldn't be on the consumers. But whilst these platforms aren't regulated properly, and this is a whole other episode on how we regulate them, that's why brand safety is an issue. There was a really good piece in The Media Leader recently, a UK publication, from Omar Oakes, their ex-editor, and he likened the brand safety debate to deciding which are the best cigarettes to smoke. Because it shouldn't be a conversation. If everything was regulated properly, you know that it's bad to have content that uses sexual imagery of children, for Christ's sake. It's pretty obvious, but it gets through on these platforms because they're not regulated properly.
Marcus Johnson (19:43):
We talked a lot about the fires that they've started, AI that is, in terms of advertisers trying to figure out brand safety. We're now going to talk a bit about how they're helping to put out those very same fires. Jacob, I'll start with you. What was one way that AI is helping advertisers for brand safety efforts?
Jacob Bourne (20:05):
Yeah, I think it comes down to just how you're using it, because AI certainly does have benefits if you're using it for its strengths. And one of its strengths is contextual analysis. So for targeting, brands can really use AI to analyze where the safe environments are to place an ad. Beyond just keyword matching, I think AI has gotten very powerful at really understanding the full context, whether it's a streaming video or a social media feed or a static webpage, really determining and detecting where the high-risk and low-risk environments are, and just placing an ad alongside content where it really fits contextually. So I think that's one of the strongest areas for AI use.
Bill Fisher (21:02):
I think that's bang on. Absolutely correct. You mentioned earlier, Jacob, that AI is really good at producing content at scale. It's really good at contextualizing and recognizing at scale as well. And that's the difference, right? Humans obviously can't filter all this stuff. Machines could, but it used to be pretty rudimentary, as you mentioned, like keyword-blocking stuff.
(21:33):
The example I use in the report is the word "shoot" or "shooting." In the old days, you'd just block that. That'd be on your blacklist. But if you want to be next to a piece of sports content about basketball, it's going to have "shoot" and "shooting" in it. You don't want to block against that. So that's what AI is good at. It can analyze tone and sentiment and narrative context and allow advertisers to distinguish whether it's okay to position a brand next to some content.
Jacob Bourne (22:05):
Yeah. And just on that point about the scale and power of AI, I think also, from a brand safety perspective, AI can vet environments by going through reams of social media posts, comments, and reviews to determine if there are concerns out there about a brand, or just to evaluate consumer sentiment. And using that data can really be beneficial for brands trying to be in line with what consumers' expectations are around brand safety.
Marcus Johnson (22:42):
Yeah. Bill, how else is AI helping with this?
Bill Fisher (22:46):
It's in the same line of thinking, but it's, again, the ability to do things at scale. We've talked about content accuracy and misinformation. AI introduces a lot of that, but again, it's really good at potentially spotting it and dealing with it. So there are a number of different solutions out there. It's a little bit of a minefield, but there are automated fact-checking solutions. There are source credibility scoring solutions, tools that give content a trust score or a bias detection score that can flag misinformation. And it can do it in real time as well, which is quite useful.
Jacob Bourne (23:38):
Yeah.
Marcus Johnson (23:40):
So that's a couple. Bill, anything else in terms of how AI is helping advertisers with brand safety?
Bill Fisher (23:47):
Yeah. So I think we've covered a couple of things from both sides, but the thing we haven't yet covered on this side of the equation, which we've spoken about in terms of issues, is the solutions around authenticity. We have AI driving a significant rise in synthetic media, but it's also enabling a bunch of tools to detect, trace, and validate it. In my report, I think I namedrop a couple of examples. Google's SynthID is one of them; Microsoft Video Authenticator is another. There are a number of ways they do it, but one is watermarking AI-generated text and images so that machines can then recognize whether it's real content or AI generated.
Jacob Bourne (24:40):
AI combating AI essentially. It's one area where I think it'll be really interesting to see how that plays out because part of the point of having these powerful AI tools is that you can't tell. That's the allure of using it.
Marcus Johnson (24:54):
So piggybacking off of that term, AI combating AI, I'm going to spring this question on you guys. We've talked about hindering, we've talked about helping. I'm wondering what the shares are, though. Is it 60% hindering, 40% helping, and so overall it's still going in the wrong direction?
Jacob Bourne (25:14):
Yeah.
Marcus Johnson (25:14):
Where would you guys land on that? Jacob, I'll let you go first.
Jacob Bourne (25:17):
Yeah, that's an interesting question. I think because we're still relatively early in this AI era, we're seeing the harms play out and then the solutions emerge. So right now, we're probably more at 60 hindering, 40 helping. But over time, hopefully, assuming that the solutions can keep up with the power of AI and the problems it poses, I think that should be flipped. At least in theory, it could flip in the other direction as we see these harms play out and there's more pressure to do something about them.
Marcus Johnson (25:56):
Okay. 60/40 hindering helping. Bill?
Bill Fisher (25:58):
Yeah, I agree. I think we're in the test and fail phase of AI because it's moving along more quickly, as I mentioned right at the top of the episode, than we can keep up. So folks are really keen on using AI. They're using it and then just waiting for the fallout and then trying to deal with it. But some of these solutions that these big companies are being forced to put into play, I think is positive. So I agree with Jacob that this, hopefully, fingers crossed, will flip at some point.
Marcus Johnson (26:33):
Yeah. Let's end with some takeaways, some things to do right now for marketers as it relates to using or maybe avoiding AI in certain contexts when it comes to brand safety. Bill, what's one recommendation for marketers right now?
Bill Fisher (26:51):
I could get on my soapbox here and talk about how brands should be a little bit more responsible and not enter this race to the bottom, not put all this poor advertising next to poor content, and go to platforms that are properly regulated, that you know are brand safe. But I think that isn't an option for many people. In the digital space, brand safety is going to be an issue. So if you were to push me for one thing for marketers, I would still say take responsibility. That's the key thing. Try and use all the tools that are available, all the ones we've mentioned. Look for the quality content if you want quality placements.
(27:40):
I would say that's the big thing. And I would say as well, sorry, this is three. Am I allowed three? I'm going to give you three. And where you think the risk is potentially the greatest, you use human oversight.
Marcus Johnson (27:56):
Yeah, I like that one.
Bill Fisher (27:56):
We're still the best at spotting these things, I think. Not at scale. But where you have a lot of value riding on it, use humans.
Marcus Johnson (28:07):
Yeah. I thought by "we're the best," you meant British people. We're not the politest, apparently.
Bill Fisher (28:16):
No, the Japanese are the best.
Marcus Johnson (28:18):
Yeah, they are. I like that one. Introduce human oversight where risk is highest. That's a really good one from your research. Jacob, what have you got for us?
Jacob Bourne (28:26):
Yeah. I'll second, or third, the human oversight point from Bill as well. But yeah, I think my message is more big picture. I think marketers should just remember that the heart of marketing is really connecting and communicating effectively with people. And so amid this pressure to use AI and adopt AI for everything, I think it's really more than just to use AI or not to use AI. The question is really, how can you continue to meaningfully connect with your consumer base in the age of AI?
(28:58):
And part of that could be adopting AI for some of the beneficial uses that we mentioned. But I think another aspect is just about how to maintain that human influence, that human storytelling that I think is so crucial, especially as we see more and more brands be vocal about saying, we're not using AI because we want to focus on things that are human made and human driven.
(29:25):
And so I think that is really becoming premium messaging in the AI era, and it's really important. And I think it'll continue to be important just in terms of, again, meaningfully connecting with consumers. And I think it ultimately comes down to a mix between effectively keeping people in the loop while also using AI for what it's best at.
Marcus Johnson (29:50):
Yeah. Perfect place and way to land the plane. That's all we've got time for for today's episode. Bill's full report on this is called "Brand Safety 2026: AI Is Multiplying Risk and Helping Manage It." The link is in the show notes, of course, and Pro Plus subscribers can head to emarketer.com and get it there as well. That's all we have time for. Thank you so much to my guests. Thank you to Bill.
Bill Fisher (30:10):
Thanks for having me, Marcus, eventually.
Marcus Johnson (30:11):
Indeed. I'll see you in 2028. Thank you to Jacob.
Jacob Bourne (30:14):
It's been a pleasure to be here, Marcus.
Marcus Johnson (30:17):
I'll see you next week. Thank you so much to the production crew. We've got Luigi and Lawrence helping us out with this one. Thanks to everyone for listening to Behind the Numbers, an eMarketer podcast. Watch upcoming episodes of our video podcast on YouTube, Spotify, and, coming this spring, Apple Podcasts. Susie will have the retail show for you this Wednesday, talking to Blake and Skye all about what's working and not working with returns.
Bill Fisher (30:39):
Hello, Marcus.
(30:45):
See, he's ignoring me.
Jacob Bourne (30:48):
Hello. Hello, Marcus.
Marcus Johnson (30:50):
You're talking to me. I just heard the last bit of that. Can you guys hear me?
Jacob Bourne (30:55):
Yeah, we can hear you.
Bill Fisher (30:57):
I'm here to confront you, Marcus.
Marcus Johnson (30:59):
Here we go.
Bill Fisher (31:01):
15 months it's been. 15 months.
Marcus Johnson (31:04):
No way. Has it really?
Bill Fisher (31:07):
Yes, way. It has. Yeah.
Marcus Johnson (31:11):
If I told you you'd been on more recently than... Paul Briggs, I guess, was on just recently, but before that it'd been a while. Does that make you feel better?
Bill Fisher (31:19):
No.
Marcus Johnson (31:22):
Okay, good. What if I told you that Corina's been on a few times, but she's been terrible.
Bill Fisher (31:27):
That makes me feel better. That definitely makes-
Jacob Bourne (31:29):
She's not here to defend herself, Marcus.
Marcus Johnson (31:31):
Exactly. What's up, Jacob? Hey, Luigi.