In September 2022, the European Commission presented a proposal for a new Cyber Resilience Act (CRA) to protect consumers and businesses from vulnerable IT products. It is the latest in a set of cybersecurity regulations including the EU Cybersecurity Act and the NIS2 Directive. Its goals are commendable, but a careful reading is required to understand its consequences and identify potential drawbacks. As usual, the devil is in the details.
For the first time, a regulation seems to address all security aspects in a consistent way
The Cyber Resilience Act in a nutshell
Understanding a huge regulation is always a challenge and requires knowledge of the regulator’s intent. For those who haven’t followed the preliminary discussions, the fact sheet published along with the proposal gives a fair summary of the European Commission’s objectives.
It proposes practical measures to improve security, such as requiring security in all phases of a product lifecycle, making configuration instructions clear and understandable, providing security patches, and reporting exploited vulnerabilities. Another positive aspect is the intent to leverage the EU Cybersecurity Act’s existing certification schemes to immediately enable third-party security assessments.
From an enforcement perspective, EU regulations don’t wait for Member States to enact them. They are immediately applicable and enforceable, as GDPR has proved. Like GDPR, the CRA can impose huge penalties, meaning it’s not a regulation that CEOs can afford to ignore!
It is also a good thing to have such a regulation at the European level, with an authority (ENISA) already in place whose role is strengthened by the regulation. For the industry, it is reassuring to have a single authority for such a large market.
The obligations for manufacturers are detailed in Article 10. They can be summarized as:
- Meet the essential requirements set out in Section 1 of Annex I
- Assess the cybersecurity risks associated with their product
- Ensure their supply chain does not compromise the security of their product
- Document the security aspects of their product
- Maintain the security of their product during its lifetime
These are all very sane principles, and no one could call these objectives into question. What’s revolutionary about the CRA is its very wide scope. In it, the legal definition of a “product with digital elements” includes not only the product itself, but also the back-end software and hardware with which the product is intended to work. From a legal and security perspective, this makes sense, but in practice it may have unforeseen consequences.
The section which may have the most impact on manufacturers is Article 24. It states that “the manufacturer shall perform a conformity assessment of the product… and (of) the processes put in place by the manufacturer to determine whether the essential requirements set out in Annex I are met.”
This is a very positive approach, in that it tries to improve not only the security of a product, but also the processes which will ensure security during the product’s lifetime. Taking these processes into account is key, given the rapid evolution of today’s technologies. Yet, limiting security only to processes is insufficient to ensure complete protection. For the first time, a regulation seems to address both sides of the coin in a consistent manner.
Also interesting is the notion of defining a level-based approach for critical and highly-critical products. Indeed, some products’ security may have a direct impact on other systems, making it important that they meet higher security assurance standards.
The CRA denies authority to the market surveillance authority it creates
Diving into some key details
Any company wishing to sell products in the European market must read Annex I carefully, because it encompasses all the new requirements that any operator (manufacturer, distributor or importer) will have to comply with. Annex I is pretty straightforward, divided into two parts: one for the products, one for the vulnerability handling process.
The one sentence which will undoubtedly raise concern requires a product to be “delivered without any known exploitable vulnerabilities” when put on the market. This immediately raises the question of exactly what a “known exploitable vulnerability” is — and unfortunately, no legal definition appears in the CRA.
In fact, this term appears only in Annex I and in the introductory memorandum, not in the legal text itself. The only related definition refers to an “actively exploited vulnerability,” which is defined in Article 3 (39) as “a vulnerability for which there is reliable evidence that execution of malicious code was performed by an actor on a system without permission of the system owner.”
Is a known exploitable vulnerability a vulnerability that is known to have been actively exploited? Given the legal impact of an exploitable vulnerability (which forbids a product from being put on the market), a clear definition of the term is essential.
Why? Because virtually every product has some known exploitable vulnerabilities under certain definitions.
Let’s take the example of CPUs. “General purpose microprocessors” are listed among the most critical products under the CRA (see Annex III, Class II). This makes sense, given the wide usage of CPUs in our IT systems. But it is also true that, by design, any CPU is prone to so-called “side channel” attacks, which can be exploited if certain conditions are met.
In simple terms, modern CPUs try to optimize their efficiency. To do so, they calibrate their performance (cache access, power consumption, etc.) based on the program data submitted as input. Thus, modern CPUs, by their very nature, leak information about the programs being run on them. If a malevolent program is running in parallel on the same CPU, this leak can be amplified to extract sensitive data from another program.
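To make the idea concrete, here is a minimal Python sketch of the general principle behind timing side channels. It is not a microarchitectural CPU attack; it simulates one with a deliberately non-constant-time string comparison, and the secret, the guesses and the artificial delay are all illustrative assumptions.

```python
import time

# Hypothetical secret held by a victim program (illustrative only).
SECRET = "hunter2"

def insecure_compare(guess: str, secret: str) -> bool:
    """Compare character by character, exiting at the first mismatch.

    The early exit makes the running time depend on how many leading
    characters of the guess are correct: a timing side channel."""
    if len(guess) != len(secret):
        return False
    for g, s in zip(guess, secret):
        if g != s:
            return False  # early exit leaks the position of the first mismatch
        time.sleep(0.001)  # exaggerate per-character cost so the demo is visible
    return True

def average_time(guess: str, trials: int = 5) -> float:
    """Measure the average time the comparison takes for a given guess."""
    start = time.perf_counter()
    for _ in range(trials):
        insecure_compare(guess, SECRET)
    return (time.perf_counter() - start) / trials

# An attacker who can only measure time observes that guesses sharing
# more leading characters with the secret take longer to be rejected.
for guess in ["zzzzzzz", "hzzzzzz", "huzzzzz", "hunzzzz"]:
    print(f"{guess!r} rejected in ~{average_time(guess) * 1000:.2f} ms")
```

Real CPU side channels measure micro-architectural effects such as cache-access latency rather than an artificial delay, but the attacker’s approach of inferring secrets from timing is the same in spirit.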
In terms of risk assessment, everything can be documented, and most CPU vendors have already analyzed and described these threats. Laboratory testing has even shown that these small leaks can be exploited to extract cryptographic keys, though this is usually demonstrated on specially crafted programs.
So, are all CPUs condemned to be banned from the EU market due to the CRA? This question may not be as silly as it sounds, since Article 43 of the proposal raises another concern.
At first reading, this article seems to make sense. But from a legal perspective, it doesn’t leave a lot of freedom for the market surveillance authority. According to Article 43.1:
Where the market surveillance authority of a Member State has sufficient reasons to consider that a product… presents a significant cybersecurity risk, it shall carry out an evaluation of the product. Where… the market surveillance authority finds that the product with digital elements does not comply with the requirements laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the product into compliance with those requirements, to withdraw it from the market, or to recall it…
How will the “significant cybersecurity risk” be assessed? Does the existence of an exploitable vulnerability qualify? What about the CPU case we just outlined? Will a market surveillance authority risk ignoring a vulnerability presented as exploitable?
Let’s consider another recent incident, in which the open-source component Log4j put thousands of products at risk. How will the market surveillance authority perform an evaluation of all the impacted products? If this evaluation reveals any non-compliance, the authority is required to demand corrective actions from the relevant operator. The current wording doesn’t leave room for interpretation. In a sense, it denies the market surveillance authority its authority.
Another problematic aspect of the CRA is an unclear definition of the period during which security patches are to be delivered. Item 12 of Article 10 in the CRA obligates manufacturers to maintain full conformity with Annex I for five years or the expected lifetime of the product, whichever is shorter.
This obligation may lead to a domino effect. Let’s take the example of CPUs again. A CPU vendor that puts its product on the market must maintain security monitoring for five years, despite the CPU’s longer expected lifetime. Now assume that a server maker like Atos decides to embed this CPU in its server, but due to integration constraints, the server is put on the market one year later. The CPU in that server now has only four years of guaranteed security support remaining. Perhaps you also embed some GPUs in your server; these might have only three years of maintenance left.
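To see how quickly these support windows diverge, here is a small Python sketch of that arithmetic as I read Article 10 (12). The dates, lifetimes and components are hypothetical, chosen only to reproduce the example above.

```python
from datetime import date

SUPPORT_CAP_YEARS = 5  # Article 10 (12): five years or the expected
                       # product lifetime, whichever is shorter

def support_end(placed_on_market: date, expected_lifetime_years: int) -> date:
    """Date at which the vendor's security support obligation ends.

    Simplified: adds whole years and ignores leap-day edge cases."""
    years = min(SUPPORT_CAP_YEARS, expected_lifetime_years)
    return placed_on_market.replace(year=placed_on_market.year + years)

# Hypothetical components embedded in a server that itself ships later.
server_launch = date(2025, 1, 1)
components = {
    "CPU": (date(2024, 1, 1), 10),  # placed on market a year before the server
    "GPU": (date(2023, 1, 1), 10),  # placed on market two years before
}

for name, (placed, lifetime) in components.items():
    end = support_end(placed, lifetime)
    remaining = (end - server_launch).days / 365
    print(f"{name}: vendor support ends {end}, "
          f"~{remaining:.1f} years left when the server launches")
```

Even in this toy model, the GPU’s support window closes two years before the server’s own five-year obligation ends.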
Multiply that complexity by every embedded component and it becomes likely that after a year or two, one component or another will become obsolete. As a product vendor, you can find a functional replacement for the obsolete component, but if you no longer have support from the component vendor, is your vulnerability handling process still compliant?
Will you have to swap out every obsolete component in the servers you have already sold? This would completely change the economics of the manufacturing industry.
We have reached dangerous ground, where the regulator gives value to the bug, and denies value to the patch.
Where the CRA could backfire
Despite these shortcomings, in my opinion the worst impacts of this regulation lie elsewhere: not in the legal part of the text (the Articles) but in Annex I. The last point of this annex refers to an operator’s obligation to “ensure that, where security patches or updates are available to address identified security issues, they are disseminated without delay and free of charge, accompanied by advisory messages providing users with the relevant information, including on potential action to be taken.”
The other part of the vicious circle comes from item 36 in the preamble to the CRA, which promotes bug bounties. I have already expressed concern about this, but I think we have now reached dangerous ground. Here, the regulator not only gives value to the bug and the vulnerability, but also denies value to the security patch.
This approach could be detrimental to the security of many products, because open-source development is often financed by support contracts, and especially security support. For example, the widely-used OpenSSL library is supported by an organization whose revenues come partly from providing extended support for previous versions of the library.
Companies pay for this support because they want security fixes guaranteed for a long period, and because the stability of their products is not compatible with the never-ending development of new features. On the other hand, making new versions available free of charge allows companies to rapidly implement new features, test them, and propose them in their new products.
Enforcing an obligation to work for free seems counterintuitive to me. Why incentivize bug finding but not security patching? If you pay for security patches, it’s more likely that you will apply them!
Given the legal risk and the amount of the penalties, some hackers could also be tempted to ransom companies by threatening to disclose their vulnerabilities. If you risk €150 million in penalties, you may opt to pay €1 million to prevent a public disclosure. Unless I’m wrong, the CRA puts the obligation on operators to disclose the vulnerabilities of their products to ENISA; the hackers are under no obligation to disclose vulnerabilities to the operators, to the market surveillance authorities, or to ENISA.
In my opinion, it is this disclosure obligation that should be enforced free of charge, not the security patches, which require a lot of cost and effort to properly develop and validate, and which can be embedded in larger upgrade packages. Again, the purpose is to mitigate cybersecurity risks, not to obtain security for free, because this won’t happen.
Conclusion
The new Cyber Resilience Act is a step in the right direction to improve the security of the digital products we all rely on. However, I feel strongly that a few of the requirements must be reconsidered before it comes into force, in order to avoid some of the negative consequences outlined above.
This guest blog is published with the kind permission of Atos and originally appeared here.
If you want to learn more about the CRA and its impact, join our Roundtable on 17 April, where you can not only hear Florent Chabaud speak, but also listen to the positions of the European Commission, the US NIST, standardization organizations, and industry. For the agenda and registration, please go to https://iotac.eu/iot-day-roundtable-2023/!