Texas Attorney General Ken Paxton (the Attorney General) announced on September 19, 2024, a landmark settlement with Pieces Technologies (Pieces) over allegations of false and misleading claims about the capabilities of its generative artificial intelligence (AI) products. Pieces, a Dallas-based health care technology company, develops, markets, and deploys AI products and services that assist health care providers with summarizing, charting, and drafting clinical notes for electronic medical records. Its products were reportedly used by at least four hospitals in Texas. This "first-of-its-kind" settlement addresses allegedly deceptive and misleading statements made by a health care technology company regarding the accuracy of its generative AI products.
Key Details of the Settlement:
- False and Misleading Claims: The Attorney General's office accused Pieces of making exaggerated claims about the performance and reliability of its AI technology. The Attorney General alleged that, to advertise its technology and services, Pieces created a series of metrics and benchmarks suggesting that its generative AI products were highly accurate. Pieces claimed an error or "hallucination" rate of "<.001%," or "<1 per 100,000." Those statements allegedly violated the Texas Deceptive Trade Practices – Consumer Protection Act.
- Settlement Terms: As part of the settlement, Pieces agreed to:
- Cease making any false or misleading statements about its AI products;
- Provide clear and conspicuous disclosures regarding measurements describing the outputs of its generative AI products that include the meaning or definition of the measurement, and the method, procedure, or any other process used by it to calculate the measurement; and
- Provide clear and conspicuous disclosures to its customers regarding "known or reasonably knowable harmful or potentially harmful uses or misuses of its products and services." The documentation to be included in such disclosures is: (i) the type of data and/or models used to train its products or services; (ii) a detailed explanation of the intended purpose and use of its products and services, as well as any training or documentation needed to facilitate proper use of the products and services; (iii) any known, or reasonably knowable, limitations of its products or services, including risks to patients and health care providers from the use of the products or services; (iv) any known, or reasonably knowable, misuses of a product or service that could result in inaccurate outputs or increase the risk of harm to individuals; and (v) for each product or service, all documentation reasonably necessary for a user to understand the nature and purpose of an output generated by a product or service, monitor for patterns of inaccuracy, and reasonably avoid misuse of the product or service.
- The AG's Position and Warning: Attorney General Paxton emphasized the importance of transparency and accuracy in AI products, especially those used in health care. He stated: "AI companies offering products used in high-risk settings owe it to the public and to their clients to be transparent about their risks, limitations, and appropriate use. Anything short of that is irresponsible… Hospitals and other health care entities must consider whether AI products are appropriate and train their employees accordingly."
Implications for Technology Companies and Businesses Procuring AI Technologies:
Technology Companies:
- This settlement serves as a critical reminder for technology companies to ensure that their marketing and promotional materials accurately reflect the capabilities and limitations of their products and services.
- AI developers must consider the information provided to customers, including instructions and documentation regarding the product, to ensure proper use. Indeed, such disclosures are required under certain regulatory regimes, including the Colorado AI Act and the EU AI Act (which has an extraterritorial scope similar to the EU's General Data Protection Regulation).
- Another significant takeaway from the AG's settlement framework is the obligation to disclose not only known but also reasonably knowable limitations of the products' outputs, as well as "all documentation reasonably necessary" for customers to understand the output.
- Companies must prioritize transparency and accuracy in marketing materials to avoid regulatory enforcement proceedings and investigations and, importantly, to maintain trust with clients and the public. This may include partnering with third-party auditors to ensure that data regarding error rates or "hallucinations" are accurate.
Businesses Procuring AI Technologies:
- The settlement also provides important takeaways for businesses that are procuring advanced technology for use in "high-risk" areas including health care, education, labor and employment, financial services, and insurance, among others.
- Developing vendor management protocols and a "checklist" of due diligence questions for procurement, among other risk management strategies, are crucial.
- Ensuring that appropriate training and education are provided for users to understand the proper use and limitations of AI technology is essential.
- Organizations should ensure robust contractual provisions when procuring third-party technology products, including obligations for ongoing monitoring and remedies relating to inaccuracies in outputs, model drift, or other potential output errors.
Conclusions
While the Attorney General announced this as a first-of-its-kind settlement agreement with a generative AI company, it is likely the first of many enforcement actions by state government agencies based on existing consumer protection laws. The Federal Trade Commission (FTC) has been very active in its oversight of AI technology companies, and the Department of Justice (DOJ) has opened investigations into potential fraud and abuse practices related to AI technologies embedded in electronic medical records. These government regulatory actions underscore the importance of implementing a robust AI governance framework, whether you are a developer or a user of AI tools.
For more information on the impact of this ruling, or if you have any questions about the ruling, please contact Alexandra P. Moylan, CIPP/US, AIGP or another member of Baker Donelson's Artificial Intelligence Team.