• CRYPTO-GRAM, August 15, 2024 Part 2

    From Sean Rima@618:500/14.1 to All on Mon Sep 23 21:22:58 2024

    I think of Solid as a set of protocols for decoupling applications, data, and security. That's the sort of thing that will make digital wallets work.

    ** *** ***** ******* *********** *************
    The CrowdStrike Outage and Market-Driven Brittleness

    [2024.07.25] Friday's massive internet outage, caused by a mid-sized tech company called CrowdStrike, disrupted major airlines, hospitals, and banks. Nearly 7,000 flights were canceled. It took down 911 systems and factories, courthouses, and television stations. Tallying the total cost will take time. The outage affected more than 8.5 million Windows computers, and the cost will surely be in the billions of dollars -- easily matching the most costly previous cyberattacks, such as NotPetya.

    The catastrophe is yet another reminder of how brittle global internet infrastructure is. It's complex, deeply interconnected, and filled with single points of failure. As we experienced last week, a single problem in a small piece of software can take large swaths of the internet and global economy offline.

    The brittleness of modern society isn't confined to tech. We can see it in many parts of our infrastructure, from food to electricity, from finance to transportation. This is often a result of globalization and consolidation, but not always. In information technology, brittleness also results from the fact that hundreds of companies, none of which you've heard of, each perform a small but essential role in keeping the internet running. CrowdStrike is one of those companies.

    This brittleness is a result of market incentives. In enterprise computing -- as opposed to personal computing -- a company that provides computing infrastructure to enterprise networks is incentivized to be as integral as possible, to have as deep access into their customers' networks as possible, and to run as leanly as possible.

    Redundancies are unprofitable. Being slow and careful is unprofitable. Being less embedded in and less essential and having less access to the customers' networks and machines is unprofitable -- at least in the short term, by which these companies are measured. This is true for companies like CrowdStrike. It's also true for CrowdStrike's customers, who also didn't have resilience, redundancy, or backup systems in place for failures such as this because they are also an expense that affects short-term profitability.

    But brittleness is profitable only when everything is working. When a brittle system fails, it fails badly. The cost of failure to a company like CrowdStrike is a fraction of the cost to the global economy. And there will be a next CrowdStrike, and one after that. The market rewards short-term profit-maximizing systems, and doesn't sufficiently penalize such companies for the impact their mistakes can have. (Stock prices depress only temporarily. Regulatory penalties are minor. Class-action lawsuits settle. Insurance blunts financial losses.) It's not even clear that the information technology industry could exist in its current form if it had to take into account all the risks such brittleness causes.

    The asymmetry of costs is largely due to our complex interdependency on so many systems and technologies, any one of which can cause major failures. Each piece of software depends on dozens of others, typically written by other engineering teams sometimes years earlier on the other side of the planet. Some software systems have not been properly designed to contain the damage caused by a bug or a hack of some key software dependency.

    These failures can take many forms. The CrowdStrike failure was the result of a buggy software update. The bug didn't get caught in testing and was rolled out to CrowdStrike's customers worldwide. Sometimes, failures are deliberate results of a cyberattack. Other failures are just random, the result of some unforeseen dependency between different pieces of critical software systems.

    Imagine a house where the drywall, flooring, fireplace, and light fixtures are all made by companies that need continuous access and whose failures would cause the house to collapse. You'd never set foot in such a structure, yet that's how software systems are built. It's not that 100 percent of the system relies on each company all the time, but 100 percent of the system can fail if any one of them fails. But doing better is expensive and doesn't immediately contribute to a company's bottom line.

    Economist Ronald Coase famously described the nature of the firm -- any business -- as a collection of contracts. Each contract has a cost. Performing the same function in-house also has a cost. When the costs of maintaining the contract are lower than the cost of doing the thing in-house, then it makes sense to outsource: to another firm down the street or, in an era of cheap communication and coordination, to another firm on the other side of the planet. The problem is that both the financial and risk costs of outsourcing can be hidden -- delayed in time and masked by complexity -- and can lead to a false sense of security when companies are actually entangled by these invisible dependencies. The ability to outsource software services became easy a little over a decade ago, due to ubiquitous global network connectivity, cloud and software-as-a-service business models, and an increase in industry- and government-led certifications and box-checking exercises.

    This market force has led to the current global interdependence of systems, far and wide beyond their industry and original scope. It's why flying planes depends on software that has nothing to do with the avionics. It's why, in our connected internet-of-things world, we can imagine a similar bad software update resulting in our cars not starting one morning or our refrigerators failing.

    This is not something we can dismantle overnight. We have built a society based on complex technology that we're utterly dependent on, with no reliable way to manage that technology. Compare the internet with ecological systems. Both are complex, but ecological systems have deep complexity rather than just surface complexity. In ecological systems, there are fewer single points of failure: If any one thing fails in a healthy natural ecosystem, there are other things that will take over. That gives them a resilience that our tech systems lack.

    We need deep complexity in our technological systems, and that will require changes in the market. Right now, the market incentives in tech are to focus on how things succeed: A company like CrowdStrike provides a key service that checks off required functionality on a compliance checklist, which makes it all about the features that they will deliver when everything is working. That's exactly backward. We want our technological infrastructure to mimic nature in the way things fail. That will give us deep complexity rather than just surface complexity, and resilience rather than brittleness.

    How do we accomplish this? There are examples in the technology world, but they are piecemeal. Netflix is famous for its Chaos Monkey tool, which intentionally causes failures to force the systems (and, really, the engineers) to be more resilient. The incentives don't line up in the short term: It makes it harder for Netflix engineers to do their jobs and more expensive for them to run their systems. Over years, this kind of testing generates more stable systems. But it requires corporate leadership with foresight and a willingness to spend in the short term for possible long-term benefits.
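    Netflix's actual tooling is far richer, but the core idea of chaos testing can be sketched in a few lines: wrap a dependency so that it fails at random, then verify the caller degrades gracefully instead of falling over. Everything here (function names, failure rates, the cached fallback) is illustrative, not Netflix's API:

```python
import random

def chaotic(fn, failure_rate=0.3, rng=random.Random(42)):
    """Wrap a service call so it randomly fails, like a chaos-testing harness."""
    def wrapped(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected failure")
        return fn(*args, **kwargs)
    return wrapped

def resilient_call(fn, retries=5, fallback=None):
    """A caller hardened against injected failures: retry, then fall back."""
    for _ in range(retries):
        try:
            return fn()
        except ConnectionError:
            continue  # transient failure; try again
    return fallback   # all retries failed; degrade instead of crashing

# Half of all calls to this dependency now fail on purpose...
flaky_lookup = chaotic(lambda: "live data", failure_rate=0.5)
# ...which forces the calling code to be written with a degraded path.
print(resilient_call(flaky_lookup, fallback="cached data"))
```

The point of running this in production rather than in a test suite is that it punishes any caller, anywhere in the system, that silently assumed the dependency would always answer.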

    Last week's update wouldn't have been a major failure if CrowdStrike had rolled out this change incrementally: first 1 percent of their users, then 10 percent, then everyone. But that's much more expensive, because it requires a commitment of engineer time for monitoring, debugging, and iterating, and it can take months to do correctly for complex and mission-critical software. An executive today will look at the market incentives and correctly conclude that it's better for them to take the chance than to "waste" the time and money.
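    The staged rollout described above is commonly implemented by hashing each customer into a stable bucket in [0, 1) and widening the eligible fraction only after the previous wave looks healthy. A minimal sketch, with hypothetical customer IDs and the 1%/10%/100% stages from the text:

```python
import hashlib

def rollout_bucket(customer_id: str) -> float:
    """Deterministically map a customer ID to a point in [0, 1).

    Hashing makes wave membership stable: a customer in the 1% wave
    is automatically in every later, wider wave."""
    digest = hashlib.sha256(customer_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def should_receive_update(customer_id: str, rollout_fraction: float) -> bool:
    """A customer gets the update once the rollout reaches their bucket."""
    return rollout_bucket(customer_id) < rollout_fraction

# Widen the rollout in stages -- 1 percent, then 10, then everyone --
# pausing between stages to watch crash telemetry before proceeding.
customers = [f"customer-{i}" for i in range(10_000)]
for fraction in (0.01, 0.10, 1.0):
    wave = [c for c in customers if should_receive_update(c, fraction)]
    print(f"{fraction:>4.0%}: {len(wave)} machines updated")
```

The expensive part is not this gating logic; it is the monitoring and the discipline to halt between stages, which is exactly the engineer time the text says executives are tempted to skip.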

    The usual tools of regulation and certification may be inadequate, because failure of complex systems is inherently also complex. We can't describe the unknown unknowns involved in advance. Rather, what we need to codify are the processes by which failure testing must take place.

    We know, for example, how to test whether cars fail well. The National Highway Traffic Safety Administration crashes cars to learn what happens to the people inside. But cars are relatively simple, and keeping people safe is straightforward. Software is different. It is diverse, is constantly changing, and has to continually adapt to novel circumstances. We can't expect that a regulation that mandates a specific list of software crash tests would suffice. Again, security and resilience are achieved through the process by which we fail and fix, not through any specific checklist. Regulation has to codify that process.

    Today's internet systems are too complex to hope that if we are smart and build each piece correctly the sum total will work right. We have to deliberately break things and keep breaking them. This repeated process of breaking and fixing will make these systems reliable. And then a willingness to embrace inefficiencies will make these systems resilient. But the economic incentives point companies in the other direction, to build their systems as brittle as they can possibly get away with.

    This essay was written with Barath Raghavan, and previously appeared on Lawfare.com.

    ** *** ***** ******* *********** *************
    Compromising the Secure Boot Process

    [2024.07.26] This isn't good:

    On Thursday, researchers from security firm Binarly revealed that Secure Boot is completely compromised on more than 200 device models sold by Acer, Dell, Gigabyte, Intel, and Supermicro. The cause: a cryptographic key underpinning Secure Boot on those models that was compromised in 2022. In a public GitHub repository committed in December of that year, someone working for multiple US-based device manufacturers published what's known as a platform key, the cryptographic key that forms the root-of-trust anchor between the hardware device and the firmware that runs on it. The repository was located at https://github.com/raywu-aaeon/Ryzen2000_4000.git, and it's not clear when it was taken down.

    The repository included the private portion of the platform key in encrypted form. The encrypted file, however, was protected by a four-character password, a decision that made it trivial for Binarly, and anyone else with even a passing curiosity, to crack the passcode and retrieve the corresponding plain text. The disclosure of the key went largely unnoticed until January 2023, when Binarly researchers found it while investigating a supply-chain incident. Now that the leak has come to light, security experts say it effectively torpedoes the security assurances offered by Secure Boot.
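    To see why a four-character password is no protection at all, consider the arithmetic: even with a 36-symbol alphabet there are under 1.7 million candidates, which a laptop can exhaust in seconds. A toy illustration of the search -- a hash stands in for the actual encrypted-file format, and the alphabet is an assumption:

```python
import hashlib
import itertools
import string

ALPHABET = string.ascii_lowercase + string.digits  # 36 symbols, assumed
keyspace = len(ALPHABET) ** 4
print(f"4-character keyspace: {keyspace:,} candidates")  # 1,679,616

# Toy stand-in for the encrypted file: a hash of the unknown password.
secret_hash = hashlib.sha256(b"ab12").hexdigest()

def brute_force(target_hash):
    """Try every 4-character candidate -- seconds of work on any laptop."""
    for candidate in itertools.product(ALPHABET, repeat=4):
        guess = "".join(candidate)
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None

print(brute_force(secret_hash))  # recovers "ab12"
```

For comparison, a random 16-character password over the same alphabet has 36^16 (about 8 x 10^24) candidates, which moves the same attack from seconds to geologic time.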

    [...]

    These keys were created by AMI, one of the three main providers of software developer kits that device makers use to customize their UEFI firmware so it will run on their specific hardware configurations. As the strings suggest, the keys were never intended to be used in production systems. Instead, AMI provided them to customers or prospective customers for testing. For reasons that aren't clear, the test keys made their way into devices from a nearly inexhaustible roster of makers. In addition to the five makers mentioned earlier, they include Aopen, Foremelife, Fujitsu, HP, Lenovo, and Supermicro.

    ** *** ***** ******* *********** *************
    New Research in Detecting AI-Generated Videos

    [2024.07.29] The latest in what will be a continuing arms race between creating and detecting videos:

    The new tool the research project is unleashing on deepfakes, called "MISLnet", evolved from years of data derived from detecting fake images and video with tools that spot changes made to digital video or images. These may include the addition or movement of pixels between frames, manipulation of the speed of the clip, or the removal of frames.

    Such tools work because a digital camera's algorithmic processing creates relationships between pixel color values. Those relationships between values are very different in user-generated images or in images edited with apps like Photoshop.

    But because AI-generated videos aren't produced by a camera capturing a real scene or image, they don't contain those telltale disparities between pixel values.

    The Drexel team's tools, including MISLnet, learn using a method called a constrained neural network, which can differentiate between normal and unusual values at the sub-pixel level of images or video clips, rather than searching for the common indicators of image manipulation like those mentioned above.
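    A "constrained" convolution in this forensic sense (the construction, due to Bayar and Stamm, that this line of work builds on) fixes the filter's center tap to -1 and normalizes the remaining taps to sum to 1, so the layer computes a pixel's prediction error from its neighbors rather than learning scene content. A minimal sketch of the constraint itself, not of the full network:

```python
import numpy as np

def constrain_filter(w):
    """Project a square filter onto the constrained-convolution form:
    center tap = -1, all other taps sum to 1. On flat image content the
    filter's response is then zero, so only prediction residuals --
    the statistical fingerprints of the imaging pipeline -- pass through."""
    w = w.copy()
    c = w.shape[0] // 2
    w[c, c] = 0.0
    w /= w.sum()      # neighbor taps now sum to 1 (a weighted predictor)
    w[c, c] = -1.0    # center subtracts the pixel being predicted
    return w

rng = np.random.default_rng(0)
w = constrain_filter(rng.standard_normal((5, 5)))
print(abs(w.sum()) < 1e-9)  # prints True: the taps cancel on flat patches
```

In training, this projection is typically re-applied after each gradient step, so the first layer stays a residual extractor throughout; the constraint is what lets the network key on camera-pipeline statistics instead of what the picture depicts.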

    Research paper.

    ** *** ***** ******* *********** *************
    Providing Security Updates to Automobile Software

    [2024.07.30] Auto manufacturers are just starting to realize the problems of supporting the software in older models:

    Today's phones are able to receive updates six to eight years after their purchase date. Samsung and Google provide Android OS updates and security updates for seven years. Apple halts servicing products seven years after they stop selling them.

    That might not cut it in the auto world, where the average age of cars on US roads is only going up. A recent report found that cars and trucks just reached a new record average age of 12.6 years, up two months from 2023. That means the car software hitting the road today needs to work -- and maybe even improve -- beyond 2036. The average length of smartphone ownership is just 2.8 years.

    I wrote about this in 2018, in Click Here to Kill Everything, talking about patching as a security mechanism:

    This won't work with more durable goods. We might buy a new DVR every 5 or 10 years, and a refrigerator every 25 years. We drive a car we buy today for a decade, sell it to someone else who drives it for another decade, and that person sells it to someone who ships it to a Third World country, where it's resold yet again and driven for yet another decade or two. Go try to boot up a 1978 Commodore PET computer, or try to run that year's VisiCalc, and see what happens; we simply don't know how to maintain 40-year-old [consumer] software.

    Consider a car company. It might sell a dozen different types of cars with a dozen different software builds each year. Even assuming that the software gets updated only every two years and the company supports the cars for only two decades, the company needs to maintain the capability to update 20 to 30 different software versions. (For a company like Bosch that supplies automotive parts for many different manufacturers, the number would be more like 200.) The expense and warehouse size for the test vehicles and associated equipment would be enormous. Alternatively, imagine if car companies announced that they would no longer support vehicles older than five, or ten, years. There would be serious environmental consequences.

    We really don't have a good solution here. Agile updates are how we maintain security in a world where new vulnerabilities arise all the time, and we don't have the economic incentive to secure things properly from the start.

    ** *** ***** ******* *********** *************
    Education in Secure Software Development

    [2024.08.01] The Linux Foundation and OpenSSF released a report on the state of education in secure software development.

    ...many developers lack the essential knowledge and skills to effectively implement secure software development. Survey findings outlined in the report show nearly one-third of all professionals directly involved in development and deployment -- system operations, software developers, committers, and maintainers -- self-report feeling unfamiliar with secure software development practices. This is of particular concern as they are the ones at the forefront of creating and maintaining the code that runs a company's applications and systems.

    ** *** ***** ******* *********** *************
    Leaked GitHub Python Token

    [2024.08.02] Here's a disaster that didn't happen:

    Cybersecurity researchers from JFrog recently discovered a GitHub Personal Access Token in a public Docker container hosted on Docker Hub, which granted elevated access to the GitHub repositories of the Python language, Python Package Index (PyPI), and the Python Software Foundation (PSF).

    JFrog discussed what could have happened:

    The implications of someone finding this leaked token could be extremely severe. The holder of such a token would have had administrator access to all of Python's, PyPI's and Python Software Foundation's repositories, supposedly making it possible to carry out an extremely large scale supply chain attack.

    Various forms of supply chain attacks were possible in this scenario. One such possible attack would be hiding malicious code in CPython, which is a repository of some of the basic libraries which stand at the core of the Python programming language and are compiled from C code. Due to the popularity of Python, inserting malicious code that would eventually end up in Python's distributables could mean spreading your backdoor to tens of millions of machines worldwide!
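    Finding a leak like this before an attacker does is largely pattern matching: GitHub gives its tokens distinctive prefixes (such as "ghp_" for classic personal access tokens) precisely so that scanners can spot them in files, logs, and container layers. A hedged sketch -- the token length shown matches classic PATs at the time of writing, and the file contents are invented:

```python
import re

# Classic GitHub personal access tokens: "ghp_" followed by 36
# alphanumeric characters. Real scanners match many more formats.
TOKEN_PATTERN = re.compile(r"\bghp_[A-Za-z0-9]{36}\b")

def scan_for_tokens(text):
    """Return candidate leaked tokens found in a blob of text -- e.g. a
    file extracted from a public Docker image layer or a stray .pyc."""
    return TOKEN_PATTERN.findall(text)

leaky_file = "auth_header = 'token ghp_" + "x" * 36 + "'  # oops\n"
clean_file = "auth_header = os.environ['GITHUB_TOKEN']\n"
print(scan_for_tokens(leaky_file))  # one candidate token flagged
print(scan_for_tokens(clean_file))  # nothing flagged
```

The same prefix scheme is what lets GitHub's own secret scanning revoke tokens pushed to public repositories; the lesson of the JFrog find is that tokens also leak through channels, like container images, that such scanning may not cover.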

    ** *** ***** ******* *********** *************
    New Patent Application for Car-to-Car Surveillance

    [2024.08.05] Ford has a new patent application for a system where cars monitor each other's speeds, and then report them to some central authority.

    Slashdot thread.

    ** *** ***** ******* *********** *************
    On the Cyber Safety Review Board

    [2024.08.06] When an airplane crashes, impartial investigatory bodies leap into action, empowered by law to unearth what happened and why. But there is no such empowered and impartial body to investigate CrowdStrike's faulty update that recently unfolded, ensnarling banks, airlines, and emergency services to the tune of billions of dollars. We need one. To be sure, there is the White House's Cyber Safety Review Board. On March 20, the CSRB released a report into last summer's intrusion by a Chinese hacking group into Microsoft's cloud environment, where it compromised the U.S. Department of Commerce, State Department, congressional offices, and several associated companies. But the board's report -- well-researched and containing some good and actionable recommendations -- shows how it suffers from its lack of subpoena power and its political unwillingness to generalize from specific incidents to the broader industry.