Dive Brief:
- Consumers are becoming more familiar and comfortable with generative AI, but a healthy dose of skepticism remains, recent research from Deloitte shows. Seven in 10 respondents to Deloitte’s survey of 3,800 consumers said the emergence of generative AI makes it harder for them to trust what they see online.
- Many consumers are embracing generative AI for its convenience, according to Jeff Loucks, executive director of Deloitte’s Center for Technology, Media & Telecommunications. However, the survey found that two-thirds of consumers are concerned they could be fooled or scammed by generative AI content.
- Labeling AI-generated content and investing in real-time deepfake detection are ways to offer transparency while building trust, according to Loucks.
Dive Insight:
Comfort with generative AI may grow naturally as the technology becomes more commonplace. But companies that actively promote transparency can ease customer concerns.
Gartner also identified transparency as a key consumer desire. Nearly 2 in 5 respondents to an October 2024 survey by the analyst firm said they would be upset if they learned a customer service conversation or chat came from generative AI.
Transparency is essential to overcoming concerns regarding generative AI, according to Nicole Greene, VP analyst at Gartner.
“People still prefer a human connection with customer service,” Greene told CX Dive in an email. “Brands will need to focus on their specific customer needs to ensure that they are delivering value through conversational AI interactions. They should also always offer customers a way to engage directly with a person if that is their preference.”
Transparency is valuable for building trust over data practices as well, according to Loucks. However, only 1 in 5 consumers believe technology providers are very clear about their data privacy and security policies, Deloitte found.
“Companies can address this by offering user-friendly data privacy and security policies that make it easy for consumers to understand what data is collected, used and how it's protected,” Loucks said.
Greene called on companies to create multidisciplinary AI councils with decision makers from throughout the organization. This helps companies craft AI and data security policies that consider multiple angles, including risk management, responsible use and building trust with employees and customers.
Marketing and CX professionals are being asked to communicate new policies around first-party customer data use, and regulators expect leaders to comply even as regulations shift, according to Greene. AI is advancing rapidly, and building consumer trust remains paramount.
“For this reason, it is crucial to establish a strong AI governance foundation based on common AI principles now, rather than in the future,” Greene said.