There is a lot that goes into building a secure product, but managing the vulnerabilities in your code base (i.e. the software components you use) is one of the most important parts. Especially if you use a lot of third-party components: (open source) libraries, maybe a Linux kernel and some hardware parts. Vulnerabilities are everywhere, and obviously the security of your product depends on the security of these components. One widely used way to monitor the used components is through the CVE - Common Vulnerabilities and Exposures - database. When a CVE comes up for a component you use, you update your firmware, test it and ship the update to the customer.
This post is about edge cases where your product might become vulnerable even when you have a fully functioning CVE Vulnerability Management system (CVE VM) in place.
Disclaimer 1 - CVE: Best we have
CVE is one of those open source / community / distributed-model systems that actually work and seem to scale pretty well. It also has enough market penetration to be the de-facto standard for identifying a vulnerability. That is nice, because when you work with vendors which do not issue CVEs, you run into problems like:
- Is that one vulnerability, or many? CVE has counting rules.
- Which of the $n vulnerabilities are you even talking about? CVE has an ID.
- CVE forces vendors to publish a minimum amount of vulnerability details to the database. There are vendors which would prefer to publish advisories on a login-protected site. CVE is public.
- CVE forces vendors to publish a minimum amount of vulnerability details to the database using JSON. CVE is machine readable.
Though CVE might not be the optimal solution, it is the best we have.
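To make "machine readable" concrete, here is a minimal sketch of pulling the basics out of a CVE record. The field paths follow my understanding of the public CVE Record Format (JSON 5.x), and the record itself is entirely made up:

```python
import json

# A trimmed, invented CVE record in the JSON 5.x shape (cveMetadata /
# containers.cna); real records carry many more fields.
record = json.loads("""
{
  "cveMetadata": {"cveId": "CVE-2021-0000", "state": "PUBLISHED"},
  "containers": {
    "cna": {
      "descriptions": [{"lang": "en", "value": "Example overflow in libfoo."}],
      "affected": [{"vendor": "example", "product": "libfoo",
                    "versions": [{"version": "1.2.3", "status": "affected"}]}]
    }
  }
}
""")

cve_id = record["cveMetadata"]["cveId"]
desc = next(d["value"] for d in record["containers"]["cna"]["descriptions"]
            if d["lang"] == "en")
affected = [(a["vendor"], a["product"])
            for a in record["containers"]["cna"]["affected"]]

print(cve_id, "-", desc)      # CVE-2021-0000 - Example overflow in libfoo.
print("affected:", affected)  # affected: [('example', 'libfoo')]
```

A PDF behind a login portal gives you none of this: no stable ID to key on, no structure to diff your bill of materials against.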
Disclaimer 2 - Systems break
I am not deep into CVE VM, nor is it part of my day-to-day work ($defaultNotWorkDisclaimer). Nonetheless, my focus has shifted enough due to work that my brain started triggering on certain tweets or blog posts: ah, that is where CVE VM breaks.
So, over the last year, each time my brain was triggered I saved the link and collected the following (edge) cases where CVE VM might fail. Since I have never seen a similar overview, I decided to quickly write them down.
Vendor does not issue CVEs
The most obvious reason why your CVE VM might fail is simply that the vendor of a component you use does not issue CVEs.
That might have multiple reasons: there are vendors which simply feel that publishing advisories on a (login-protected) portal, buried in a PDF, is good practice. Sadly, this is more common than you might think.
Other reasons not to issue CVEs might be the necessary management overhead (which is small) or laziness. Maybe the fact that GitHub is a CNA will at least decrease the number of open source components which do not assign CVEs. There are many variations of this quirk:
- Vendors which do not issue CVEs at all
- Vendors which issue CVEs only sometimes (e.g. when they are forced by someone external)
- Vendors which usually do issue CVEs, but sometimes forget?
- ARM quietly fixing a speculative execution issue on ERET in Linux upstream. OpenBSD had to rediscover the issue.
- Varnish published a new version with a security fix in October. The CVE has been "TBD" ever since.
- One of many issues in an open source project on GitHub without a CVE.
A bug is a bug, is a bug
Linux might be the most popular project which falls into this category. The Linux security documentation is pretty clear: "The security team does not normally assign CVEs, nor do we require them for reports or fixes, as this can needlessly complicate the process and may delay the bug handling."
Fraunhofer AISEC put numbers onto what we all know: Stack Overflow is a critical part of software engineering:
15.4% of the 1.3 million Android applications we analyzed, contained security-related code snippets from Stack Overflow. Out of these 97.9% contain at least one insecure code snippet. Stack Overflow Considered Harmful? The Impact of Copy & Paste on Android Application Security
How do you get notified about a vulnerability in your Stack Overflow code? :)
Fuzzing is everywhere. Syzbot is fuzzing the Linux kernel and its modules. The OSS-Fuzz project found, as of August 2019, over 14,000 bugs in 200 open source projects. If you dig through the bug trackers of these open source projects, CVEs are rarely assigned.
Obviously nobody wants to analyze all those bugs to find out whether they are security relevant or not - and in fact, that is not always obvious. Seeing the sheer number of bugs, I feel like a dedicated attacker could easily find an exploitable vulnerability that was never categorized as security critical, let alone got a CVE.
Triage is hard: Wrong / Missing vulnerable components
This is mostly a problem of vulnerability triage. Locating a bug is sometimes really hard, and from the researcher's perspective it is often not obvious where it actually lives (in a library or in upstream code?). For projects which do not have a CNA, everybody can request CVEs from MITRE. That sometimes leads to situations like when VLC got a 9.8, even though the vulnerability was really in libebml (which did not issue a CVE). In this case the error was corrected, because VLC started a little storm in a teacup and got some Twitter attention (after having faced this issue several times already).
Then there are situations where things get very complex and not all affected versions or products are mentioned in the CVE and advisory:
As part of our research effort, we investigated 115 distinct releases for Apache Struts and correlated these releases against the 57 existing Apache Struts Security Advisories covering 64 vulnerabilities. We found that 24 Security Advisories incorrectly stated the impacted versions for the vulnerabilities contained within the correlated advisory. In total, 61 additional unique versions of Struts were identified as being impacted by at least one previously disclosed vulnerability.
Supply Chain attacks
What about supply chain attacks? There is a nice collection of these attacks on GitHub. Does your component issue a new version when a supply chain attack happens, or does it only re-release the same version with a valid hash or signature? A CVE is not going to be issued - to the best of my knowledge. Also, many dependency management tools like pip for Python pin a dependency to a specific version. Without a CVE you will never be notified that you should bump the version.
In 2012 Tavis Ormandy found a critical arbitrary memory write vulnerability in the Sophos Security Appliance. Then, 5 years later, Google Project Zero rediscovered that issue as a bug in unrar. It seems like Sophos patched their unrar fork but did not notify upstream about the vulnerability and fix.
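A minimal sketch of the pinning problem: comparing the pins in a requirements.txt against an advisory list. The package names and fixed versions below are made up, and the point is the gap - without a CVE feed mapped to your pins, nobody runs this comparison for you:

```python
# Hypothetical advisory data: package -> first fixed version.
ADVISORIES = {
    "requests": (2, 20, 0),
    "pyyaml": (5, 4, 0),
}

def parse_version(v):
    """Turn '2.18.0' into the comparable tuple (2, 18, 0)."""
    return tuple(int(part) for part in v.split("."))

def stale_pins(requirements_text):
    """Yield (package, pinned_version) pairs older than the fixed version."""
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue  # skip comments and unpinned requirements
        name, _, version = line.partition("==")
        fixed = ADVISORIES.get(name.lower())
        if fixed and parse_version(version) < fixed:
            yield name, version

reqs = "requests==2.18.0\npyyaml==5.4.1\nflask==2.0.0\n"
print(list(stale_pins(reqs)))  # [('requests', '2.18.0')]
```

The pin that made your build reproducible is the same pin that keeps shipping the vulnerable version until something tells you to bump it.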
mainline, release, stable, forks
There are some projects which have a mainline release and a stable release cycle. Linux is probably the most prominent example of this. Linux-Stable receives about 5% of the patches from upstream. There are some rules the community follows for deciding which patches are included in Linux-Stable and which are not.
Then there are forks, for example Android. Google is working hard to bring Android closer to Linux LTS, especially after a critical use-after-free bug which was only backported to some branches ("cherry-picking" patches to branches is common). The farther away you are from upstream, the more likely you are to lose important patches.
I don't know of any examples where the LTS kernel was vulnerable because it missed a patch. Obviously, you can't be sure about that.
Even when compiling your code, things can go wrong. There are compiler optimizations which can lead to serious security issues - for example a privilege escalation in the tun/tap driver in Linux, where a NULL pointer check was optimized out.
Open source projects often fix security vulnerabilities in their public repo, and the fixes are then shipped with the next release. Even if a CVE is assigned, the vulnerability might be exploited during the time gap between the commit and the release.
There are some examples collected by Samuel Groß from P0:
Furthermore, at least for WebKit, it is often possible to extract details of a vulnerability from the public source code repository before the fix has been shipped to users. CVE-2019-8518 can be used to highlight this problem (as can many other recent vulnerabilities). The vulnerability was publicly fixed in WebKit HEAD on Feb 9 2019 with commit 4a23c92e6883. This commit contains a testcase that triggers the issue and causes an out-of-bounds access into a JSArray - a scenario that is usually easy to exploit. However, the fix only shipped to users with the release of iOS 12.2 on March 25 2019, roughly one and a half months after details about the vulnerability were public.
This exploit targets CVE-2017-2505 which was originally reported by lokihardt as Project Zero issue 1137 and fixed in WebKit HEAD with commit 4a23c92e6883 on Mar 11th 2017. The fix was then shipped to users with the release of iOS 10.3.2 on May 15th 2017, over two months later.
Altogether there are 7 examples in the blog post!
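A rough sketch of how that patch gapping works from the attacker's side: scan the public commit log between the last shipped release and HEAD for security-sounding fixes. In practice the messages would come from something like `git log vLAST_RELEASE..HEAD`; the keyword list here is a made-up heuristic, not a real classifier:

```python
import re

# Crude heuristic for commit messages that smell like security fixes.
SECURITY_HINTS = re.compile(
    r"use.after.free|overflow|out.of.bounds|CVE-\d{4}-\d+|security",
    re.IGNORECASE,
)

def suspicious_commits(messages):
    """Return commit messages that look like silent security fixes."""
    return [m for m in messages if SECURITY_HINTS.search(m)]

# Invented log entries standing in for `git log` output.
log = [
    "Fix out-of-bounds read in JSArray::shift",
    "Update copyright headers",
    "Backport fix for CVE-2019-8518",
]
print(suspicious_commits(log))  # two of the three messages match
```

With fix commits often including a triggering test case, this turns the release gap into free vulnerability intelligence.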
Exodus Intel showcased that exploiting Chrome using patch gapping is feasible:
Do you remember when Tibetan groups were targeted recently? The attackers used a patch-gapped bug to exploit their targets (maybe the one linked above from Exodus).
POISON CARP appears to have used Android browser exploits from a variety of sources. In one case, POISON CARP used a working exploit publicly released by Exodus Intelligence for a Google Chrome bug that was fixed in source, but whose patch had not yet been distributed to Chrome users.