Open source software development does not ensure quick security fixes

In recent years, “open source” software development has become an increasingly popular practice for technology companies. By making source code available to the general public with relaxed (or non-existent) restrictions on how it can be used or modified, open source software can increase customer and developer usage, build customer loyalty, and help develop high-quality products while keeping development costs low.

Many companies also believe that the open source model promotes better quality control, as developers and users are able to “look under the hood” and diagnose code errors, bugs, and needed improvements with ease. In fact, the reputation of open source technology as an error-correction method is so well established that it has spurred an internet “law” known as “Linus’s Law”: “Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.” In other words, the more eyeballs looking at a problem, the more likely it is that an efficient and effective solution will be found. By adopting open source development, the theory goes, companies maximize the number of eyeballs and therefore create the safest and best product possible. In contrast, with a closed development system, code is not publicly available, and companies must rely solely on their internal development processes to catch and repair coding problems.

But is open source development’s reputation as a bug-buster deserved? A December 2020 report by GitHub, a Microsoft software development subsidiary (and the Internet’s largest host of open source project infrastructure), suggests otherwise. In 2020, over 56 million developers used GitHub, creating over 60 million new data repositories and making over 1.9 billion edits, and 94% of GitHub projects relied on open source technology. Noting the significant expansion of open source technology into high-security markets, including banking and health care, GitHub decided to examine whether open source’s reputation for error correction was deserved.

What GitHub found was that code vulnerabilities in open source programs go undetected for an average of four years before they are located, even with the public’s eyeballs looking for them. While 83% of these errors were not considered malicious, 17% were considered harmful, opening the software to backdoor penetration and the potential for remote computer access, access to plaintext in cryptographic systems, and the ability to access, transfer, or modify privileged information such as passwords and user data. In other words, Linus’s Law has a significant blind spot: while public users and developers are capable of locating errors in open source code, there is no guarantee that those errors will be caught in a timely manner. With nearly 1 in 5 errors considered harmful, companies that rely solely on the public to troubleshoot their code may be exposing themselves to a serious and prolonged risk of harm.

GitHub’s report provides an important reminder that while the open source community can help patrol for software issues, it is not a replacement for traditional controls. Companies must still ensure that rigorous internal error testing and proper development protocols are in place both before and after a new product launches. While the development community at large has an interest in promoting the security of open source software, companies must first and foremost take responsibility for their own data security.