Virginia Gov. Glenn Youngkin vetoed a bill late last month that would have placed guardrails on “high-risk” AI use cases for consumers.
The Virginia High-Risk Artificial Intelligence Developer and Deployer Act, HB 2094, would have created requirements for the development, deployment and use of “high-risk” AI systems and held companies responsible for protecting consumers from their misuse.
But Youngkin vetoed the legislation, saying it created a “burdensome” AI regulatory framework.
“HB 2094’s rigid framework fails to account for the rapidly evolving and fast-moving nature of the AI industry and puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments,” he wrote in his veto.
Despite the failed measure in Virginia, AI regulation is inevitable, experts told CX Dive. Leaders should be aware of the themes in legislation and develop responsible AI governance that prioritizes consumer trust and safety, they said.
The most notable piece of legislation is overseas, however. The European Union passed the AI Act last year, with enforcement deadlines spread out through 2027. The AI Act assigns AI applications to three risk categories — unacceptable, high-risk and limited risk — and bans the first, assigns requirements to the second and applies lighter transparency obligations to the third.
“The EU has regulation that has already been passed, and if you do business in Europe, regardless of where you are, you will have to comply with those requirements,” said Enza Iannopollo, principal analyst at Forrester. “And if GDPR taught us something, they taught us that that becomes sort of the global standard in the absence of other pieces of regulation.”
The state of legislation
The legislative fight in Virginia is indicative of a wider debate on the promises and risks of AI to consumers. As the technology has rapidly progressed, there are two perspectives, according to Mario Matulich, president of Customer Management Practice.
“One is, let’s continue to move really fast because it has tremendous potential, and we want to realize and unleash that potential to drive better customer experiences,” Matulich said. “On the other side of this: How do we do it responsibly? How do we do it in a way that’s not going to negatively impact customers?”
Had the legislation passed, Virginia would have been one of only a few states to have AI consumer protection laws. Utah enacted the Utah Artificial Intelligence Policy Act last spring, which focuses on regulating companies that use generative AI tools with their customers. Colorado also passed the Colorado AI Act, which focuses on governing “high-risk” use cases, last spring. California’s legislation focuses on protecting consumer data generated by AI use.
“To a certain extent, [regulation] is inevitable,” Iannopollo said. “I expect to see more activity in the states. We won't have anything at the federal level, but I do expect some of the states to pass more of these AI regulations.”
Iannopollo said there are two main thrusts to proposed legislation and passed laws: first, providing transparency to consumers about when AI is being used and, second, mitigating risk by providing more guardrails in high-risk situations.
“When it comes to transparency, 80% of respondents in Europe and about 78% in the U.S. told us that they want to know if they are exposed to AI,” she said.
Some legislation categorizes AI use as high, medium or low risk and requires different guardrails for each.
High-risk use cases include using AI to profile customers when deciding whether to grant them a mortgage or to prioritize who gets access to certain healthcare treatments, Iannopollo said. Low-risk use cases include using the technology to transcribe and summarize customer service calls or to provide updates on where a certain shipment is.
Responsible AI governance
The concerns around the deployment of AI to consumers are valid, experts said.
Brands that have successfully deployed AI have kept trust and safety at the forefront of their efforts, while those that have not have had to go back to the drawing board, Matulich said.
“Some of the largest and most accomplished companies in the world, they'll tell you that when you invest into AI and you invest in various forms of self service and other AI-powered customer experiences, you have to do it from the perspective of driving more value for your customers,” Matulich said. “You have to do it from the perspective of delivering exceptional customer service, first and foremost, not from the perspective of cost savings.”
AI has immense potential to improve personalization, speed and ease of service — the top drivers of customer satisfaction and loyalty, Matulich said. The risks of AI gone wrong, however, include bias and discrimination.
“If there is bias and the customer is on the receiving end of that, not having a possibility to ask, ‘Why is this happening? Can you fix it?’ those are important elements that not only can have a negative impact on the experience, they can have a negative impact on effectively the life of that customer,” Iannopollo said.
There’s also a question about the role of human agents in an AI world. Gartner analysts predict the European Union will pass a “right to talk to a human” mandate by 2028.
Matulich supports such a concept and thinks brands should, too, because it’s good customer experience.
“If my preference is to talk to a human in that given instance, depending on the situation, I should have the ability to do that, and that's just good customer experience. That's just good customer service,” he said. “You’re personalizing experience and giving your customer the opportunity to talk to a human if, in that moment, that's their preference.”
Matulich urges lawmakers to talk to tech experts and companies before drafting legislation and to build advisory boards.
“Work to understand how this technology can be shaped and utilized, and the way we ask our AI developers to develop in a responsible way and put trust and safety at the forefront,” he said. “I think when we think about the laws and the guardrails that are being implemented, we have to kind of take our own advice and do the same there, because we don't want to cramp or curb the innovation at the same time, we want to ensure that our customers are taken care of.”