Cyber and AI risk disclosures: A director’s take
Any public company board member will be familiar with the Risk Factors section of 10-Ks and information statements (sometimes titled differently, such as "Important Factors That May Affect Future Results").
Even in the shortened format, these sections can occupy ten or more pages. Some companies prefer very broad risk disclosures, while others are more specific. The disclosures tell investors which risks the company's future results are exposed to, and they also give the board and management greater liability protection should the company suffer a loss as a result of a disclosed risk.
Both cyber and AI are relatively new additions to the risk landscape, and both have characteristics that distinguish them from more conventional risks. For instance, a company hit by a cyber intrusion may not know who the perpetrator is, the full extent of the breach, how long it has been active or whether it has been contained. An AI incident carries similar unknowns, with additional questions about the source and quality of the information the AI used.
Cyber, AI and the materiality of breaches
On December 15, 2023, the SEC introduced a requirement for companies to file a Form 8-K within four business days after they determine a breach is "material." Around this time, Erik Gerding, Director of the SEC's Division of Corporation Finance, said in a statement (representing his personal views), "ultimately it is the company's responsibility to make a materiality determination based on consideration of all relevant facts and circumstances."
In its December 2023 statement, the SEC made clear that there are no special materiality tests for cyber intrusions. The challenge is deciding what is material, given the sophistication of today's cyber exploits. Sophisticated malware typically involves a delivery mechanism, a payload and an execution stage, so a company may recognize that an incident has occurred yet be unable to determine its implications.
Another feature of these more sophisticated exploits is their ability to avoid detection by creating modified versions of themselves, altering their code with each infection by mutating their encryption routines. These are polymorphic viruses, and detecting them, containing them and deciding whether they are material can be a challenge for the board and the Chief Information Security Officer (CISO).
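To see why this mutation defeats signature-based scanning, consider the toy Python sketch below. It is only an illustration of the principle, using a harmless string in place of a payload: the same content, re-encoded with a fresh random key on each "infection," produces entirely different file signatures, so a scanner matching known hashes sees two unrelated files.

```python
import hashlib
import os

# A benign stand-in for a malware payload: the behavior is identical
# across variants; only the outer wrapper changes.
payload = b"identical behavior every time"

def xor_encode(data: bytes, key: bytes) -> bytes:
    """Encode data with a repeating XOR key (a toy stand-in for a
    polymorphic engine's per-infection encryption)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Two "infections" of the same payload, each with a fresh random key.
variant_a = xor_encode(payload, os.urandom(16))
variant_b = xor_encode(payload, os.urandom(16))

# A signature scanner matching on hashes sees no relationship at all.
print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
```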
AI, of course, is a huge aid to productivity. Unfortunately, it boosts the productivity of criminals and other bad actors just as readily, so the danger posed by polymorphic and other viruses is magnified. To complicate matters, a company also needs to know whether any of its suppliers, vendors, contractors or customers has had a material incident, and whether that incident is also material to the company itself.
One way to protect companies and their boards from unknowingly failing the materiality test, even before a breach occurs, may be to include a general disclosure statement on cybersecurity risks in the Risk Factors section of the 10-K or the relevant part of the information statement.
Virtually every company discloses its exposure to data loss and security breaches in greater or lesser detail, but companies generally do not discuss the unique characteristics of cyber risk exposure. The SEC's materiality test may push them to make these disclosures more granular. For instance, a separate, specific cyber risk disclosure could state that while the company is not currently aware of any breaches, attacks, or malicious software or hardware, such threats could be present on any of the company's systems (and those of its suppliers, customers, vendors and contractors) and, when discovered, could cause the company to suffer a material data breach. The disclosure could also explain that although the company and its service providers perform regular audits of their information systems security, the company cannot monitor the containment or spread of malware within those service providers' systems.
Moving forward with ever-increasing cyber and AI risks
It is still too early to see what companies will report in their 2023 10-Ks about AI incident risk. Criminals are using AI to deepfake videos and recordings, a sort of video spoofing. The risk is very real: imagine, for example, a deepfaked CEO announcing a huge earnings loss, with significant share-price implications. AI should have its own specific risk disclosure, warning, among other things, of deepfake attacks, misuse of data by AI and the potential lack of AI review by third-party service providers.
In addition, the newly required SEC disclosure on cybersecurity risk management and strategy in the 10-K can also detail safeguards such as zero-trust architecture, multi-cloud service provider review, encryption and multi-factor authentication. The aim is for the company to give the public enough general disclosure to warn of unforeseen circumstances that may arise before the company can determine that a cyber or AI incident is material.
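To make one of those safeguards concrete, here is a minimal sketch of how multi-factor authentication commonly works under the hood: a time-based one-time password (TOTP, per RFC 6238) derived from a secret shared between the server and the user's authenticator app. The Base32 secret in the example is made up for illustration, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret, for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and is derived from a secret the attacker does not hold, a stolen password alone is no longer enough to log in.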
To stay on top of the developments pertinent to their business, whether ESG, cyber, AI, compensation, corporate governance or otherwise, directors must continue to educate themselves. Read, and read some more, to understand the nuances of the risks that matter most.