Several weeks ago, the Linux community was rocked by the disturbing news that University of Minnesota researchers had developed (but, as it turned out, not fully executed) a method for introducing what they called “hypocrite commits” to the Linux kernel — the idea being to distribute hard-to-detect behaviors, meaningless in themselves, that could later be aligned by attackers to manifest vulnerabilities.
This was quickly followed by the — in some senses, equally disturbing — announcement that the university had been banned, at least temporarily, from contributing to kernel development. A public apology from the researchers followed.
Though exploit development and disclosure are often messy, running technically complex “red team” programs against the world’s biggest and most important open-source project feels a little extra. It’s hard to imagine researchers and institutions so naive or derelict as not to understand the potentially huge blast radius of such behavior.
It’s equally certain that maintainers and project governance are duty-bound to enforce policy and avoid having their time wasted. Common sense suggests (and users demand) that they strive to produce kernel releases that don’t contain exploits. But killing the messenger seems to miss at least some of the point — that this was research rather than pure malice, and that it casts light on a kind of software (and organizational) vulnerability that begs for technical and systemic mitigation.
I think the “hypocrite commits” contretemps is symptomatic, on every side, of related trends that threaten the entire extended open-source ecosystem and its users. That ecosystem has long wrestled with problems of scale and complexity, and with free and open-source software’s (FOSS) increasingly critical importance to every kind of human undertaking. Let’s look at that complex of problems:
- The biggest open-source projects now present big targets.
- Their complexity and pace have grown beyond the scale where traditional “commons” approaches or even more evolved governance models can cope.
- They are evolving to commodify each other. For example, it’s becoming increasingly hard to state, categorically, whether “Linux” or “Kubernetes” should be treated as the “operating system” for distributed applications. For-profit organizations have taken note of this and have begun reorganizing around “full-stack” portfolios and narratives.
- In so doing, some for-profit organizations have begun distorting traditional patterns of FOSS participation. Many experiments are underway. Meanwhile, funding, headcount commitments to FOSS and other measures of engagement seem to be in decline.
- OSS projects and ecosystems are adapting in diverse ways, sometimes making it difficult for for-profit organizations to feel at home or see benefit from participation.
Meanwhile, the threat landscape keeps evolving:
- Attackers are bigger, smarter, faster and more patient, leading to long games, supply-chain subversion and so on.
- Attacks are more financially, economically and politically profitable than ever.
- Users are more vulnerable, exposed to more vectors than ever before.
- The increasing use of public clouds creates new layers of technical and organizational monocultures that may enable and justify attacks.
- Complex commercial off-the-shelf (COTS) solutions assembled partly or wholly from open-source software create elaborate attack surfaces whose components (and interactions) are accessible and well understood by bad actors.
- Software componentization enables new kinds of supply-chain attacks.
- All of this is happening as organizations seek to shed nonstrategic expertise, shift capital expenditures to operating expenses, and depend on cloud vendors and other entities to do the hard work of security.
The net result is that projects of the scale and utter criticality of the Linux kernel aren’t prepared to contend with game-changing, hyperscale threat models. In the specific case we’re examining here, the researchers were able to target candidate incursion sites with relatively low effort (using static analysis tools to assess units of code already identified as requiring contributor attention), propose “fixes” informally via email, and leverage many factors, including their own established reputation as reliable and frequent contributors, to bring exploit code to the verge of being committed.
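To make the “relatively low effort” point concrete, here is a minimal, hypothetical sketch of that first triage step only: ranking files in a source tree by how often their own comments flag them as needing work. This is not the researchers’ actual tooling — the real work used proper static analysis over code already identified as requiring contributor attention — and the directory path and FIXME/TODO heuristic are assumptions made purely for illustration.

```python
# Hypothetical sketch: rank source files by how often their own comments
# flag them as needing work (FIXME/TODO/XXX). This only illustrates how
# cheaply candidate "incursion sites" can be shortlisted; the actual
# research used real static analysis, not comment counting.
import os
import re
from collections import Counter

MARKERS = re.compile(r"\b(FIXME|TODO|XXX)\b")

def rank_flagged_files(root: str, top_n: int = 20) -> list[tuple[str, int]]:
    """Return the top_n C source/header files under root with the most markers."""
    counts: Counter[str] = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".c", ".h")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    counts[path] = len(MARKERS.findall(f.read()))
            except OSError:
                continue
    return counts.most_common(top_n)

if __name__ == "__main__":
    # "linux" is an assumed path to a local kernel checkout.
    for path, hits in rank_flagged_files("linux"):
        print(f"{hits:4d}  {path}")
```

The sketch is only meant to show that the triage step costs almost nothing; the real leverage came from the trust relationships that the proposed “fixes” then exploited.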
This was a serious betrayal, effectively by “insiders” of a trust system that’s historically worked very well to produce robust and secure kernel releases. The abuse of trust itself changes the game, and the implied follow-on requirement — to bolster mutual human trust with systematic mitigations — looms large.
But how do you contend with threats like this? Formal verification is effectively impossible in most cases. Static analysis may not reveal cleverly engineered incursions. Project paces must be maintained (there are known bugs to fix, after all). And the threat is asymmetrical: as the classic line goes, the blue team needs to defend against everything, while the red team only needs to succeed once.
I see a few opportunities for remediation:
- Limit the spread of monocultures. Projects like AlmaLinux and AWS’s Open Distro for Elasticsearch are good, partly because they keep widely used FOSS solutions free and open source, but also because they inject technical diversity.
- Reevaluate project governance, organization and funding with an eye toward mitigating complete reliance on the human factor, as well as incentivizing for-profit companies to contribute their expertise and other resources. Most for-profit companies would be happy to contribute to open source because of its openness, and not despite it, but within many communities, this may require a culture change for existing contributors.
- Accelerate commodification by simplifying the stack and verifying the components (see the verification sketch after this list). Push appropriate responsibility for security up into the application layers.
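As one small, hypothetical illustration of “verifying the components”: checking the artifacts pulled into a build against a pinned manifest of known-good SHA-256 digests. The manifest name and format here are assumptions, not a reference to any particular tool, and real supply-chain verification (signatures, SBOMs, reproducible builds) goes well beyond this.

```python
# Hypothetical sketch: verify downloaded components against a pinned manifest
# of SHA-256 digests before letting them into a build. The manifest format
# ("name  digest" per line) and file names are assumptions for illustration.
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Compute the hex SHA-256 digest of a file, streaming to bound memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest_path: str) -> bool:
    """Return True only if every listed component matches its pinned digest."""
    ok = True
    with open(manifest_path) as manifest:
        for line in manifest:
            if not line.strip() or line.startswith("#"):
                continue
            name, expected = line.split()
            actual = sha256_of(name)
            if actual != expected:
                print(f"MISMATCH: {name}\n  expected {expected}\n  got      {actual}")
                ok = False
    return ok

if __name__ == "__main__":
    # "components.lock" is an assumed manifest name.
    sys.exit(0 if verify("components.lock") else 1)
```

The design point is that pinning and checking digests is cheap to automate, which is exactly the kind of systematic mitigation that reduces reliance on the human factor alone.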
Basically, what I’m advocating here is that orchestrators like Kubernetes should matter less, and Linux should have less impact. Finally, we should proceed as fast as we can toward formalizing the use of things like unikernels.
Regardless, we need to ensure that both companies and individuals provide the resources open source needs to continue.
Source: https://techcrunch.com/2021/07/18/the-end-of-open-source/