SEC Warns Individual Actors of Potential Liability for AI-Related Security Risk Disclosure Failures
Over the past few months, the Securities and Exchange Commission (the "SEC") has issued several warnings to companies to ensure accurate disclosure of the role and risks of artificial intelligence ("AI") in their businesses.1 In recent remarks at the 2024 Program on Corporate Compliance and Enforcement Spring Conference, SEC Enforcement Director Gurbir S. Grewal extended this warning to individual actors in the context of disclosure failures related to AI as a security threat.2
During his remarks, Director Grewal reiterated the SEC's warning against "AI-washing," or making materially false or misleading representations regarding the use of AI in company statements. He highlighted two recent settlements with investment advisers charged with making false and misleading statements about their use of AI. He compared these actions to a similar enforcement action in the environmental, social and governance ("ESG") context, noting that the SEC's experience with ESG investing provides an "instructive starting point" for tackling the risks relating to AI.
However, unlike in previous SEC statements on AI-related disclosures, Director Grewal commented on the SEC's approach to individual liability for disclosure failures related to AI as a security threat. He stated that, as with other cybersecurity disclosure failures, the SEC "look[s] at what a person actually knew or should have known; what the person actually did or did not do; and how that measures up to the standards of [the SEC's] statutes, rules, and regulations." He maintained that "folks who operate in good faith and take reasonable steps are unlikely to hear" from the SEC.
Director Grewal's remarks serve as a reminder that individual actors also have an obligation to ensure accurate AI-related disclosures. The SEC is not only monitoring individual-level conduct; it is also encouraging individuals who acted in good faith to report disclosure failures and signaling that it may take action against those who do not. Individual actors should understand their disclosure obligations and ensure their companies are accurately representing the role and risks of AI in their businesses. White & Case's Global Technology Industry Group draws on a range of disciplines and expertise to help clients address these issues. Should enforcement or regulatory inquiries arise, the Firm's White Collar/Investigations Practice has current, practical experience managing such inquiries, and the Firm's Public Company Advisory Group has experience advising public companies on their SEC disclosure obligations.
1 Navigating New Frontiers in Regulatory Enforcement: the SEC Increases Scrutiny of Artificial Intelligence, White & Case LLP; Recent Regulatory Announcements Confirm Increased Scrutiny of "AI-Washing", White & Case LLP; DOJ Doubles Down on Warnings Against AI Misuse, White & Case LLP; New Settlements Demonstrate the SEC's Ongoing Efforts to Hold Companies Accountable for AI-Washing, White & Case LLP.
2 Gurbir S. Grewal, Remarks at Program on Corporate Compliance and Enforcement Spring Conference 2024, SEC.gov.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.
This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.
© 2024 White & Case LLP