The Tools Needed To Track And Catalog Hardware Vulnerabilities
Monitoring for cyberattacks is a key component of hardware-based security, but what happens afterward is equally important.
Logging and cataloging identified hardware vulnerabilities to ensure they are not repeated is essential for security. In fact, thousands of weak points have been identified as part of the chip design process, and even posted publicly online. Nevertheless, many companies continue to maintain a code of silence when it comes to exploitable flaws in their designs.
Large language models may offer a solution. They can harvest data about weaknesses, while helping to preserve the reputations of design engineers. But there are two main hurdles. First, making this work requires a vast amount of training data. Second, there is still an air of mistrust when it comes to disclosing hardware vulnerabilities, even where there is sufficient data.
What makes this approach particularly timely is that more data is being collected than ever before, thanks to new monitoring strategies, alongside a commensurate increase in the number of attack surfaces driven by the growing complexity of heterogeneous designs and systems.
“There are a lot of passive attacks — things you can do without taking the chip off the board, as long as you have physical access to the chip,” said Scott Best, senior technical director for silicon IP product management at Rambus. “A lot of the software attacks can also be done remotely, but there are passive attacks that you can go after. It’s inexpensive, and not a lot of specialty is needed. You can move to semi-invasive attacks, where you start messing with the environment of a chip, and you start causing glitches and injecting lasers.”
Each chip comes with many attack surfaces, Best said. That makes tracking vulnerabilities an essential step in any cybersecurity strategy, to protect both a chip manufacturer’s IP and the data being processed and stored with that IP.
New tech for tracking vulnerabilities
To properly track vulnerabilities, they must first be detected. The techniques for doing this are becoming increasingly comprehensive and are starting to roll out in some high-end chips, said Mike Borza, a scientist specializing in hardware security at Synopsys. One example involves monitoring hot spots on AI server chips for unexpected changes.
“There are centralized points that are really cross-point switches, each with their own configurations,” Borza said. “When you see something that happens counter to the configuration, like the destination port for something originating from a given source is wrong, you can record that fact. Those kinds of firewalls are well known in hardware device networking, yet they’re not well known — or at least not widely used — in small device networks on chips. But they are used in bigger chips. The bigger the chip, the more it resembles a network that is composed of discrete devices, so more kinds of configuration and monitoring already exist in those devices, and you can tune that for security purposes.”
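The configuration-aware firewall Borza describes can be sketched in software. The model below is a minimal, hypothetical illustration — the port names, route table, and logging format are invented for this example and are not taken from any real NoC IP:

```python
# Minimal sketch of a network-on-chip firewall that records transactions
# running counter to its configured routing rules. All names illustrative.

from dataclasses import dataclass, field

@dataclass
class NocFirewall:
    # allowed[source_port] = set of destination ports that source may reach
    allowed: dict
    violations: list = field(default_factory=list)

    def check(self, source: str, destination: str) -> bool:
        """Return True if the transaction matches the configuration;
        otherwise record the violation for later analysis."""
        if destination in self.allowed.get(source, set()):
            return True
        self.violations.append((source, destination))
        return False

# Example configuration: the CPU may reach DRAM and the crypto block,
# but the DMA engine may only reach DRAM.
fw = NocFirewall(allowed={
    "cpu": {"dram", "crypto"},
    "dma": {"dram"},
})

fw.check("cpu", "dram")    # permitted
fw.check("dma", "crypto")  # counter to configuration -> recorded
print(fw.violations)       # [('dma', 'crypto')]
```

The recorded violations are what a centralized agent would aggregate across a population of devices to spot attack trends.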
Borza compared modern methods of detecting hardware vulnerabilities to those that have been long deployed in network monitoring, where endpoints are monitored for indicators of an attack. “What gets more interesting is the idea that you can use these centralized agents to collect statistical data about what’s happening network-wide or across a population of devices, and then use that to spot trends that are related to an attack. You can watch it evolve. You can watch where it’s evolving, what organizations are involved, or what management entities are involved. This gives a very powerful capability to start to understand what’s happening network-wide.”
Despite the value of this type of monitoring, it is not widely deployed due to privacy concerns. But that could change.
“It’s an obvious place where AI tools are going to be deployed, because you can use those to spot things you hadn’t seen before,” Borza said. “You’ll essentially be spotting zero days in the wild, even before you understand what the source of them is, and then have the potential to respond to that automatically. You’ll be able to ship firmware updates or software updates that allow you to patch up vulnerabilities as they’re being detected for the first time and are just starting to be exploited.”
The importance of cataloging
To avoid repeating past mistakes, various companies have begun adopting tools that allow them to search for vulnerabilities as new designs are developed.
“Automatic tools exist, both commercial and proprietary, that can be used to manage and monitor known vulnerabilities, thereby preventing their occurrence in new software or hardware (design language) code,” said Peter Laackmann, senior vice president of security at Infineon. “For example, it is a common practice that publicly accessible GitHub repositories are automatically scanned using such tools, and that a lot of topics have been revealed so far. Many cases are related to the ongoing use of outdated libraries that still allowed attacks, while new improved versions were already available.”
Further, some researchers believe there are better, more comprehensive ways to monitor for known weaknesses. Farimah Farahmandi, assistant professor of hardware security at the University of Florida, pointed to a tool that has yet to be fully deployed that leverages large language models. In a paper presented at the 2024 IEEE International Symposium on Hardware Oriented Security and Trust, Farahmandi and her co-authors detailed a database called Vul-FSM, which they said used an LLM to compile 10,000 vulnerable finite state machine designs.
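The kind of FSM weakness such a database catalogs can be shown with a toy model. The lock controller below is hypothetical (not drawn from Vul-FSM itself), but the flaw it demonstrates — transitions for undefined inputs falling through to an unintended state — is the pattern described by CWE-1245, improper finite state machines in hardware logic:

```python
# Toy FSM illustrating a CWE-1245-style weakness: transitions for
# unexpected inputs are not explicitly defined, so a glitched or
# undefined input can reach a privileged state. Names are illustrative.

LOCKED, CHECK, UNLOCKED = "LOCKED", "CHECK", "UNLOCKED"

def vulnerable_step(state, inp):
    # BUG: only the expected inputs are handled; any other input in
    # CHECK falls through to UNLOCKED instead of resetting to LOCKED.
    if state == LOCKED and inp == "key":
        return CHECK
    if state == CHECK and inp == "bad":
        return LOCKED
    if state == CHECK:          # glitched/undefined inputs land here
        return UNLOCKED
    return state

def hardened_step(state, inp):
    # FIX: enumerate the single valid path; every undefined input
    # resets the machine to its safe state.
    if state == LOCKED and inp == "key":
        return CHECK
    if state == CHECK and inp == "ok":
        return UNLOCKED
    return LOCKED

# A glitched input "??" in the CHECK state unlocks the vulnerable
# machine, but resets the hardened one.
print(vulnerable_step(CHECK, "??"))  # UNLOCKED
print(hardened_step(CHECK, "??"))    # LOCKED
```

Cataloging thousands of such flawed FSMs gives an LLM concrete patterns to recognize in new design-language code.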
Along with her fellow academics, Farahmandi said she has “used LLMs for every aspect of security verification.” The models she works with are trained on vulnerabilities that have been detected at every step of design, testing, and verification.
“What LLMs can do is learn about all of the vulnerabilities that have been reported, and you can ask them, what are these hardware security vulnerabilities?” Farahmandi explained. “It can search the web, and it goes through all of the security vulnerabilities that have been reported. It can also learn from the Common Vulnerabilities and Exposures and Common Weakness Enumeration databases.”
Still, there is one major hurdle for using LLMs in identifying and tracking vulnerabilities. While thousands of weak spots are documented in the CVE and CWE databases, many companies are reluctant to expose flaws in their designs, even in the name of the greater good, Farahmandi said. “Most of these companies try to guard their vulnerabilities. They do not share them because they’re afraid that they may lose a lot of customers or revenue. They might have the CWE, which has emerged and comes from many manual efforts, many bug bounties, but not in a systematic way. We are sure that there are a lot of vulnerabilities, and if we have this concept of open sourcing, the whole of security verification will be improved significantly.”
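The public side of that data is already machine-readable. The sketch below shows how a tool might pull hardware-related entries from NIST's NVD CVE service; the endpoint and `keywordSearch` parameter follow NVD's documented REST API, while the sample payload is abbreviated and illustrative:

```python
# Sketch: extracting CVE IDs and descriptions from an NVD API response.
# Endpoint and "keywordSearch" are from NVD's documented REST API;
# the sample payload below is abbreviated and illustrative.

import urllib.parse

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def build_query(keyword: str) -> str:
    """URL for a keyword search, e.g. for hardware-related entries."""
    return NVD_API + "?" + urllib.parse.urlencode({"keywordSearch": keyword})

def extract_cves(payload: dict) -> list:
    """Flatten an NVD 2.0 response into (id, English description) pairs."""
    out = []
    for item in payload.get("vulnerabilities", []):
        cve = item["cve"]
        desc = next((d["value"] for d in cve.get("descriptions", [])
                     if d.get("lang") == "en"), "")
        out.append((cve["id"], desc))
    return out

# Abbreviated sample in the NVD 2.0 response shape.
sample = {
    "vulnerabilities": [
        {"cve": {"id": "CVE-2018-3615",
                 "descriptions": [{"lang": "en",
                                   "value": "Speculative execution side channel..."}]}}
    ]
}

print(build_query("rowhammer"))
print(extract_cves(sample))
```

Entries mined this way are exactly the public corpus an LLM can be trained on; the privately guarded flaws Farahmandi describes are what it never sees.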
Borza acknowledged the possibilities of using LLMs here, noting there are private versions already in development. “There’s going to be a lot of action around those kinds of tools and products, both open-source and public-domain, as well as commercial products,” he said. “That’s the way of the future.”
As of today, there is little standardization of vulnerability cataloging, outside of some isolated efforts such as a NIST standard.
“A mature company that is performing well actually maintains its own internal history of what it has learned about its products and about how those products were used, and that includes things like security issues that arise so that they cannot repeat those mistakes,” said Dana Neustadter, senior director of product management for security solutions at Synopsys. “It’s a sign of a chaotic or disorganized design team or engineering organization that isn’t doing those kinds of things. Every chip is like a new experience, and it can be birthed without much context for prior history, except to set the general specifications for the part where you know what its functions are, what its performance is going to be. You do see that occasionally, but I would characterize that as an immature organization.”
Borza noted that IEEE began work on P3164 in 2022, a standard that aims to address security concerns in integrated circuit designs. The standard contains a methodology for identifying elements, including input and output ports, “that can influence the behavior of a critical section within the design, and associates known security weaknesses based on the type of design and/or critical section.”
An initial draft is expected within the next year and will provide guidance on disclosing information about the security of IP products.
“In the 3164 standard there is actually a known vulnerabilities or known weaknesses database,” Borza said. “It’s implemented in a database in most tools, and that knowledge base is used to capture information, like what you get from CWE. CWE is the knowledge base for tooling that is going to use IEEE 3164 as its guiding standard. There’s also the SAE G32 working groups, which are considering hardware, software, and system weaknesses that need to be dealt with during design for automotive and the other kinds of systems that SAE is concerned with. That’s another place where there’s ongoing work to share amongst industry players.”
Even if a lack of open-source data was not a hurdle, Laackmann noted that no tool can be the ‘ultimate’ solution to prevent every potential vulnerability in the future. “The focus of vulnerability management tools is to learn from the past in order to prevent problems in the future. This means that even with the broadest variety of tools available today, the quality of security products strongly depends on the know-how and experience of the manufacturer. A typical high-security product, e.g., certified according to Common Criteria and used in government applications like passports, is designed for many years of operation without the option of software updates. Therefore, the best way of developing long-lasting security products is the ‘security-by-design’ strategy, focused on strengthening the core security architectures even against unknown attacks.”
Because of this, institutional memory can be the most powerful tool designers have when it comes to tracking hardware vulnerabilities. It requires highly skilled, experienced engineers who recognize that design complexity often turns out to be the enemy of security.
“For example, high-complexity products like application processors using differentiated cache systems are often attacked due to their timing behavior, which may leak secrets like private keys. A smart development strategy may prevent unnecessary complexity,” Laackmann said. “Another example involves hardware development tools that may undermine security strategies by code optimization. A tool may not know that a specific design has been intentionally chosen for security reasons, and subsequently removes that security feature. What may be close to an ultimate fix is a combination of highly skilled, highly experienced personnel using the right tools, followed by an in-depth external evaluation by an accredited third party and certification.”
The need to comprehensively track vulnerabilities will only grow more urgent, Synopsys’ Borza said, as the industry increasingly turns towards multi-die systems, due to the added threats that come along with a more complex supply chain. “In the future, we’re talking about chiplets being designed by different manufacturers in different parts of the world. So as a basis for the supply chain security tracking, this security also can be leveraged,” Borza added.
Conclusion
Tracking and cataloging hardware vulnerabilities to not repeat past mistakes has become a significant concern for many hardware companies. New tools are being developed that can aid in detecting possible weaknesses in high performance chips, and those may soon find their way into cheaper products, as well.
However, comprehensive cataloging of known vulnerabilities has run into roadblocks because most companies keep an air of secrecy around security flaws found in their products. Though public databases exist, most prefer to maintain a private, in-house catalog that their designers can consult. This has hindered the development of new LLM-based tools that could lead to greater industry-wide security. But experts also warn that no tool can be a cure-all, and the strongest defense against recurring security vulnerabilities is institutional memory, upheld by subject matter experts equipped with the proper tools.
Related Reading
LLMs Show Promise In Secure IC Design
Large language models can identify and plug vulnerabilities, but they also may open new attack vectors.
Edge Devices Require New Security Approaches
More attack points and more valuable data are driving new approaches and regulations.
Why It’s So Hard To Secure AI Chips
Much of the hardware is the same, but AI systems have unique vulnerabilities that require novel defense strategies.
Devising Security Solutions For Hardware Threats
Keeping up with attackers is proving to be a major challenge with no easy answers; trained security experts are few and far between.