In 1897, Guglielmo Marconi obtained the first patent for a radio-based wireless telegraphy system. It was extremely basic, able to do no more than transmit and receive Morse code, but that was enough to save about 700 people when the Titanic sank in 1912. Six years before that, in 1906, came the first musical radio broadcast. Innovations were coming as quickly as engineers could churn them out, and one particular sort of innovation came quicker than any other. In 1903, just six years after its invention, Marconi radio was hacked.
In brief, a rival inventor named Nevil Maskelyne broadcast rude messages at high enough power to swamp Marconi’s carefully calibrated detectors, ruining a live demonstration at the Royal Institution. It wasn’t a particularly elegant intrusion, but it did its job. It certainly embarrassed Marconi, who had been claiming that other people’s signals could not interfere with his system. As early as 1903, the “hacker instinct” that the late 20th century would appropriate and mythologize was there.
The Maskelyne hack didn’t hurt anyone. It just made some fancy people look dumb. Fundamentally, there wasn’t much worse that could have been done with land-based demonstration radio, because it was up to Marconi what he did with the information he received. The worst Maskelyne could have done was exactly what he did: send Marconi some bad information and let events take their course. Historically, communication networks, from carrier pigeons to semaphore tower lines, have worked this way: a self-contained system at one end, run by humans, generates a message and sends it to another self-contained system at the other end, also run by humans. You can do no worse than convince a human, whose mind can eventually be changed, of an untrue fact.
With computers, hacking has gone through a paradigm shift. Computer systems combine the two functions of communication and decision-making. They will receive information and then act on it without ever informing a human, which is as it should be: imagine having to OK every single line of code in every software update. However, this also means that, if you can convince a computer of an untrue fact, you can cause serious real-world damage, because the computer immediately acts on that information. This fact, as I am sure you expected, brings us to Bluetooth chastity cages.
Earlier this month, security researchers found a flaw in a Bluetooth-enabled chastity cage which they could use to lock it remotely without authentication, in such a way that the legitimate owner could not regain control and would be faced with the prospect of using an angle grinder or a pair of bolt cutters to take it off. Thankfully, as far as the media are aware, the researchers found this vulnerability before any criminals did and a fix has been released, but it is easy to imagine a parallel universe in which the consequences might have been worse.
These sorts of news stories are fairly common these days, with vulnerabilities being found in everything from smart coffee makers to smart locks. The Internet of Things—the current vogue for putting app-enabled microchips in consumer products whether they need it or not—has given computers control not only over digital data but also over the physical world. This goes beyond sex toys: in 2015, researchers took control of an internet-connected Jeep Cherokee and were able to kill its engine from miles away. As everything becomes connected, everything becomes vulnerable, and the consequences can put people in the hospital or the morgue.
There are three possible responses to this, and only one of them is defensible. The first is to hope that you won’t become a target for hackers. To this, I need only bring up Marconi: he didn’t even have a computer, and he got hacked. The urge to subvert complex systems is written deep in the human psyche, and calling it a new thing or a passing fad is simply ignorant. The second response, equally common and equally wrong, is to claim that your code is secure. Well, I might not be able to make your “Hello World” code empty your bank account, but software is more complicated than that.
I was going to pontificate on this for a while, but I didn’t think I could do any better than just giving an example. Let’s say you and a friend are working together in a Zoom room with cameras off. You want to make sure your friend hasn’t fallen asleep and isn’t answering your questions on autopilot, so every so often you tell him a random word, like “hat,” and ask him to repeat it back to you. To make sure he gets the right word, you tell him how many letters it has. A few years ago, the vulnerability in this algorithm exposed passwords across the internet. It’s called Heartbleed. Did you spot it?
Between humans, the algorithm works perfectly fine. Between computers, however, it’s a different story. If you and your friend are both computers, you store everything you know in memory. So, you tell your friend “hat, three letters,” and your friend stores “hat” in memory and then reads three letters back out of memory. Very good. But if you tell your friend “hat, two thousand letters,” your friend, if he’s following the algorithm exactly, will read back “hat” and then 1,997 letters’ worth of whatever else is in his brain. Now imagine that your friend is Dean Khurana: 1,997 letters’ worth of his private thoughts would probably contain some pretty serious confidential information.
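For those who would rather see the mistake in code, here is a minimal sketch of the same idea in C. To be clear, this is a toy model, not OpenSSL’s actual source: the names (echo_word, server_memory, the fake password) are invented for illustration, and the real bug lived in OpenSSL’s heartbeat-message handling rather than in a word-repeating game.

```c
#include <stdio.h>
#include <string.h>

/* Toy model of the Heartbleed pattern.  The "server" keeps everything it
 * knows in one big buffer, stores each incoming word at the front of it,
 * and echoes back however many letters the client *claims* the word has. */

static char server_memory[4096];

/* Vulnerable handler: trusts the claimed length. */
static void echo_word(const char *word, size_t claimed_len, char *reply) {
    strcpy(server_memory, word);                /* store the word...           */
    memcpy(reply, server_memory, claimed_len);  /* ...then read claimed_len
                                                   bytes back, even if that
                                                   runs far past the word's end */
    reply[claimed_len] = '\0';
}

/* Fixed handler: never echo more letters than were actually sent. */
static void echo_word_safe(const char *word, size_t claimed_len, char *reply) {
    size_t real_len = strlen(word);
    if (claimed_len > real_len) {
        claimed_len = real_len;                 /* the bounds check Heartbleed lacked */
    }
    strcpy(server_memory, word);
    memcpy(reply, server_memory, claimed_len);
    reply[claimed_len] = '\0';
}

int main(void) {
    /* Pretend some confidential data is already sitting in the server's
     * memory, right next to where incoming words get stored. */
    strcpy(server_memory + 4, "password=hunter2");

    char reply[4096] = {0};

    echo_word("hat", 2000, reply);          /* the client lies: "hat, two thousand letters" */
    printf("leaked: %s\n", reply + 4);      /* prints the adjacent secret */

    echo_word_safe("hat", 2000, reply);
    printf("safe:   %s\n", reply);          /* prints just "hat" */
    return 0;
}
```

The fix, as the second handler shows, is nothing cleverer than refusing to echo back more letters than were actually sent.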
It sounds like a dumb mistake when I explain it, there’s an obvious fix, and the internet is now mostly safe from this bug. But my point is that the mistake isn’t obvious until someone has spotted and explained it. OpenSSL, the library in which this vulnerability was found, has been worked on for decades by some of the brightest minds in cybersecurity. If you think you can do better, on a budget, with venture capitalists hassling you to get release 1.0 out the door, in a company that makes app-enabled sex toys, you’re kidding yourself. All computers are hackable.
This leaves us with only one response: don’t put computers in everything. Let me be clear. I like computers in most things, and I think Internet of Things tech is very cool when it’s not in something that could hurt me if it goes wrong. On the other hand, I think it is not unreasonable to demand that the things that could hurt me, from chastity cages to cars, be as dumb and therefore as unhackable as possible. I want my car to do what I say, and the only way to make sure of that is to remove all other brains, real or artificial, from its control system.
I would argue that not starting companies to make potentially dangerous Internet of Things products is not just a smart business move but a moral duty. Food manufacturers have a moral duty to manufacture edible food; if they are not completely confident that their food is not poisonous, they should not ship it. Similarly, hardware manufacturers should understand that, if they are not completely confident that a hacker could not remotely cause someone serious bodily harm by compromising their device, they should not ship it. As Harvard students, we often end up in leadership positions in tech companies, and I am calling on all of you to make sure that Internet of Things devices are designed on the understanding that, sooner or later, they will be compromised.
For most of history, this has not been a serious problem. A corrupted radio message can be fixed by sending another one on a different frequency. Bank computers log all transactions and can reverse many in the event of identity theft. Recently, though, vulnerable devices have moved from the ethereal, mathematical world of software, where everything has an undo button, to the squishy, messy, all-too-human world of hardware. People are no longer merely being inconvenienced by hackers. Soon, they will be murdered. Prevent murder. Don’t join badly thought-out Internet of Things start-ups. I don’t think that’s too much to ask.
Michael Kielstra ’22 (pmkielstra@college.harvard.edu) owns devices which run four different operating systems. None of them drive his car.