The Federal Trade Commission’s finalized rule on fake reviews allows the agency to penalize bad actors, but actual enforcement faces challenges.
“The scope of the problem is immense,” said Matt Schwartz, policy analyst with Consumer Reports. “There's some estimates that between 8% and 40% of reviews that folks encounter are fake.”
Not only are fake reviews so numerous they’re hard to track, but bad actors can use generative AI to make fake reviews look real, hampering enforcement. Even with the FTC’s firepower, CX leaders will want to proactively take steps to stop fake reviews on their sites and maintain customer trust.
The stakes are high, experts told CX Dive.

“Ultimately, the key dangers are the erosion of consumer trust, regulatory non-compliance risks, and the distortion of fair market competition — all of which can have severe long-term consequences for brands and retailers,” Randy Mercer, chief strategy officer of 1WorldSync, the parent company of PowerReviews, said via email.
Is generative AI making the problem worse?
Though the FTC covers generative AI-produced content in its finalized rule, the technology allows bad actors to more easily create fake reviews — and make them sound real — potentially causing trouble for enforcement.
“A lot of times people’s antenna goes up when they see a review, and maybe it has a lot of misspellings or weird username or something's fishy about it,” Schwartz said. “That heuristic that we use to suss out fake reviews might go away as ChatGPT and other generative AI products make it easier to make a fully convincing paragraph.”
But content is only one way businesses can determine whether a review is fake. Trustpilot, which aggregates user reviews of companies and websites, looks at not only content, but how the review has been added to the site to determine its legitimacy.
“What steps is a reviewer going through in order to submit that review?” Anoop Joshi, chief trust officer of Trustpilot, said. “Sure, someone can create hundreds of review content extracts in ChatGPT, but ChatGPT does not help you get the review content from ChatGPT and onto the platform.”
Trustpilot determines whether the review is fake at this stage by examining device identifiers, IP addresses and relationships between reviews.
“So whilst content creation has become much easier, the sort of moving that content onto review platforms, getting that into other places, is not being facilitated by generative AI today,” Joshi said.
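The submission-time signals Joshi describes can be sketched in outline: group incoming reviews by device identifier and IP address, then flag clusters that trace back to a single source. The data model, field names and threshold below are illustrative assumptions for the sake of the sketch, not Trustpilot's actual pipeline.

```python
from collections import defaultdict

# Hypothetical review submissions. In practice a platform would capture these
# signals server-side at submission time; the fields here are assumptions.
reviews = [
    {"id": 1, "ip": "203.0.113.5", "device": "abc", "rating": 5},
    {"id": 2, "ip": "203.0.113.5", "device": "abc", "rating": 5},
    {"id": 3, "ip": "203.0.113.5", "device": "abc", "rating": 5},
    {"id": 4, "ip": "198.51.100.7", "device": "xyz", "rating": 2},
]

def flag_suspicious(reviews, max_per_source=2):
    """Flag reviews whose (IP, device) pair submitted more than max_per_source."""
    by_source = defaultdict(list)
    for r in reviews:
        by_source[(r["ip"], r["device"])].append(r["id"])
    flagged = set()
    for ids in by_source.values():
        if len(ids) > max_per_source:
            flagged.update(ids)
    return flagged

print(sorted(flag_suspicious(reviews)))  # ids 1-3 share one IP/device pair
```

Real systems weigh many more signals (timing patterns, account history, relationships between reviews), but the point stands: even perfectly fluent AI-written text cannot disguise where and how a review arrives.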
What else can businesses do?
Online platforms have employed a number of tactics to limit fake reviews.
Amazon and Trustpilot have taken legal action against companies selling fake reviews. Yelp “downplays” reviews it believes to be fake.
“Rather than outright delete them, they show them as not recommended,” Jabr said. “Others have added a helpfulness vote to crowdsource the legitimacy of reviews.”
Mercer encourages companies to preserve review integrity by implementing a comprehensive content moderation process, ensuring full transparency around review sources and proactively addressing the new FTC guidelines.
Some businesses have also begun working together to tackle the problem. The Coalition for Trusted Reviews, created last fall by Trustpilot, Amazon, Booking.com, Expedia Group, Glassdoor and Tripadvisor, is working as a unified industry voice with regulators across the U.K., Europe and the U.S.
The coalition, Joshi said, is an opportunity to tackle some of the common fake review tactics, define best practices and share information, instead of fighting fake reviews in silos.
“What you often find is Trustpilot might be taking action against a review seller that we've identified,” he said. “But those review sellers may also be trying to sell reviews onto other platforms in the coalition as well. So we're pooling our resources in that regard so that we can, together at an industry level, start to tackle some of these issues.”