The people arriving at “Willy’s Chocolate Experience” at a warehouse in Glasgow, Scotland, this February were expecting an immersive event that would transport them to a “magical realm” filled with colorful, larger-than-life lollipops.
At least, that was what images of the event, created by generative AI, promised. What attendees got instead was a sparsely decorated warehouse that left kids in tears and led some attendees to call the police, believing they had been scammed.
The Glasgow event is a prime example of “when you promise the world with these AI images, and it completely underdelivers,” Audrey Chee-Read, a principal analyst at Forrester, said during a presentation at Forrester’s CX Summit Tuesday in Nashville, Tennessee.
Chee-Read, who studies consumer behavior, said that consumer trust in generative AI is exceedingly low, and it’s up to brands to bring their customers along — or risk losing the trust they hold.
“With the proliferation of AI, there is a halo of consumer skepticism,” Chee-Read said. “When it comes to AI, consumer distrust is the default.”
The data backs that up. Only a quarter of consumers trust information provided by generative AI, according to Forrester’s research. Three-quarters of people believe companies should disclose when they are using generative AI. And less than a quarter feel comfortable giving up personal information to generative AI tools.
How brands can build trust in generative AI
On a basic level, brands need to make sure that the output meets customers’ expectations. Overpromising and underdelivering is a recipe for disappointment and distrust.
CX leaders need to maintain human oversight, analysts urged.
“Some organizations will fly too high, too fast,” said J.P. Gownder, VP and principal analyst at Forrester, during a presentation earlier in the day. Responsible generative AI use keeps a human in the loop.
Chee-Read offered the example of UberEats using generative AI to produce an image of a fruit pie for a restaurant selling pie. The problem? The restaurant was selling pizza pie, not dessert pie, and customers eager for a sweet treat found themselves with pizza instead.
When use of generative AI goes wrong — whether from overpromising or lack of oversight — trust is the first thing to go.
“The first thing that we always see when it comes to backlash is that you lose the credibility that you have built and you have been building,” Chee-Read told CX Dive. “It is very, very easy to completely lose the years of work that a brand might have spent to build that credibility and not to be taken as a joke.”
The company that put on the Willy Wonka experience, for example, lost the trust of its customers and became the butt of a joke across the internet, she said. Such errors in judgment can also stifle experimentation and innovation for years to come.
“When a company then tries to bounce back and share new innovation, they can't be taken seriously, and there is more work to be done when they're trying to come back,” Chee-Read said.
Customers want to know when generative AI is used. For Chee-Read, the answer is simple: “disclose, disclose, disclose.”
“You need to not be afraid to share when AI is being used,” Chee-Read said. “If you’re gun-shy to tell people, then you need to scrutinize why and whether you should be using it.”