FTC Settles With Security Firm Over AI Claims Under Agency’s Compliance Program
The Federal Trade Commission has reached a settlement with a security screening device maker as part of the FTC’s “Operation AI Comply” program to crack down on companies’ allegedly deceptive claims about the artificial intelligence capabilities of their products.
But the proposed settlement with Evolv Technologies, announced Tuesday, may be the FTC’s most controversial resolution yet because it gives some customers the ability to unilaterally cancel their contracts.
While all five FTC commissioners voted to file the complaint, Republican Commissioner Melissa Holyoak dissented on the FTC’s proposed notice to customers giving them the right to cancel their Evolv contracts.
“The notice, in effect, creates a cancellation right in Evolv’s contracts where one does not otherwise exist,” wrote Holyoak. “The majority includes this provision in the proposed order even though the commission has no authority under Section 13(b) of the FTC Act to change contractual terms—whether through rescission or reformation of contract remedies—or seek refund of monies.”
Holyoak and fellow FTC Commissioner Andrew Ferguson, also a Republican, are said to be among those President-elect Donald Trump is considering to replace FTC Chair Lina Khan, a Democrat.
Waltham, Massachusetts-based Evolv’s security scanners are used in venues such as hospitals, stadiums and schools. Half of its Express scanners are installed in more than 800 schools across 40 states, the FTC said.
The devices use electromagnetic fields to detect characteristics of objects in such a way that items in coats or backpacks may not have to be removed, according to the company.
The commission’s complaint stated that the company promotes the AI-powered scanners as an alternative to traditional metal detectors.
“Evolv deceptively advertised that its (scanners) would detect all weapons and made misleading claims that its use of artificial intelligence makes its screening systems more accurate, efficient and cost-effective than traditional metal detectors,” the FTC stated.
The complaint alleges Evolv’s scanners failed in numerous instances to detect weapons, including a seven-inch knife brought into a school in 2022 that was used to stab a student, yet falsely flagged harmless personal items.
The issues forced some schools to add additional security measures, making Evolv’s system “more like traditional lower-cost metal detectors,” the FTC added.
In-house counsel for companies providing or using AI tools need to evaluate past, current and planned product advertisements “to confirm the veracity of any claims about AI-powered products and avoid overreaching,” said Alex Brown, a partner at Alston & Bird LLP, in Atlanta.
“The Evolv Technologies enforcement action shows that the FTC’s Operation AI Comply will continue to crack down on companies that make what the FTC deems unfair or deceptive statements about the capabilities of AI-powered products or services,” Brown added, “especially if the product or service overstates the ability of AI to solve real world problems.”
Evolv, which neither admitted nor denied the allegations in the FTC’s complaint, did not immediately respond Tuesday to a request for comment.
The proposed settlement would bar the company from making misrepresentations about the ability of its scanners to detect weapons or to ignore harmless items, particularly in comparison to traditional metal detectors.
Misrepresentations about scanning speed or labor costs relative to traditional detectors would also be prohibited, as would misrepresentations about any “material aspect of its performance, including the use of algorithms, artificial intelligence or automated systems or tools.”
Holyoak supported that part of the complaint.
“The commission’s action today is important to stop deception in the security screening market for schools, and protects not only Evolv’s school customers but also the children and teens whose safety depends in part on the efficacy of security screening systems,” Holyoak wrote.
However, she objected to the legality of a portion of the settlement in which Evolv must notify K-12 school customers that they can opt to cancel contracts signed between April 1, 2022, and June 30, 2023.
Citing the U.S. Supreme Court ruling in AMG Capital Management v. FTC, Holyoak stated the commission lacks authority under Section 13(b) “to obtain redress or disgorgement for garden variety Section 5 deception and unfairness counts.”
Thus, “relief cannot be expanded to unilaterally change contractual terms or suspend future contractual payments,” she wrote.
Ferguson, however, stated that Section 13(b) of the FTC Act provides that the commission may obtain from a federal district court a permanent injunction for certain violations.
The part of the order concerning contracts “merely forbids Evolv from engaging in future conduct under a contract induced by material representations,” Ferguson wrote. “This purely prospective, prohibitory relief sounds in ‘injunction’ rather than ‘rescission.’”
Ferguson’s take, which differs from that of fellow Republican Holyoak, may be instructive.
“Companies and their counsel should not rely on a decline in federal enforcement under the Trump Administration to justify pushing the envelope in this area,” said Alston’s Brown.
Brown co-authored a client note last month that provides precautionary considerations for companies using AI.
“AI is likely to remain an enforcement priority even after the FTC changes leadership next year,” Brown said. “State attorneys general are similarly enforcing state unfair and deceptive statutes against companies making overly aggressive claims about AI, including the first-of-its-kind settlement reached by the Texas AG in August.”
He was referring to the first state attorney general settlement with a company accused of deceptively marketing generative AI software, in which Texas Attorney General Ken Paxton took aim at Irving, Texas-based Pieces Technologies.
Pieces claimed that its AI software compiles health care worker notes, charts and other data, then quickly outputs a highly detailed patient summary, potentially saving doctors and nurses considerable time and effort.
Texas regulators alleged the potential for “hallucinations” in the software could endanger patients and that clinicians may have a false sense of security.
A hallucination refers to content that is not based on real data but is instead produced by a machine learning model’s creative interpretation of its training data.
The settlement with Texas prohibits the company from misrepresentations about the accuracy, reliability or efficacy of its products. Among other terms, Pieces agreed to clearly and conspicuously disclose known harmful or potentially harmful uses of its products and disclose the data and models used to train its AI products.
Pieces said it “vigorously denies any wrongdoing and believes strongly that it has accurately set forth its hallucination rate, which was the sole focus of the (action).”