To hack: to study a system’s flaws and emergent properties, and use them for your own ends; to instill your own instructions into a computer’s memory, and coerce its microprocessor to run them. To pick at the air gaps and missed stitches in the many overlapping layers of software from which our modern world is woven.
Et voilà, an entire industry, employing countless thousands. Information Security a.k.a. infosec. It is said that there are four PR people for every journalist in America, which seems high, but I expect the ratio of infosec people to actual hackers is higher yet, even if you count the proverbial script kiddies.
For a long time it was where the counterculture techies went, the curmudgeons, the renegades, in black boots and leather and tattoos and colored hair. By no coincidence they also tended to include many of the smartest ones. (I’m a CTO, and to this day I find that interview questions about security are the best way to separate the merely good from the excellent.) And by no coincidence they also included many angry, wounded, and/or terrible people.
That was when the Internet was something people used from time to time, rather than the fundamental substrate of half of human activity. It was OK, as far as its users were concerned, for its walls to be built and defended (and only very rarely womanned, courtesy of infosec’s default oppressive, exclusionary, and often predatory sexual culture) by a cohort of … well … cranky assholes. Not all of them, I hasten to stress. But definitely a disproportionate number.
That was part of its appeal, in many ways. Bad boys in leather who could spin up hard drives and ransom data from across the planet with a few opaque, wizardly shell scripts, in green text on black, using knowledge they’d won the hard way from online duels and grimoires — that was the Hollywood myth of the hacker, and the much-less-romantic real hackers loved it, as you’d expect, whatever color their notional hats might be.
It was a shitty system and a shitty subculture in many ways — colorful and dramatic, sure, but essentially shitty — and it couldn’t last. Nowadays it is big business, on the one hand, and slowly becoming more equitable and less exclusionary, on the other. Don’t get me wrong, there’s much work to be done, but the trajectory is a hopeful one.
Nowadays the security biz is an iterative process rather than an exploratory frontier. Researchers discover vulnerabilities in software; they disclose them to vendors; vendors grumble and fix them. Security vendors offer a growing arsenal of tools to prevent, detect, log, and attribute attacks, iterating as attackers do the same — and attackers are, increasingly, likely to be 9-5ers paid by a nation state, rather than members of a criminal enterprise.
One of the most respected teams in infosec is Google’s Project Zero, and another is their Chrome security team; both are managed by Parisa Tabriz, who gave the keynote speech at Black Hat today. She pointed out that there has been good and measurable progress in the security world over the last few years. Initially, when Project Zero started giving vendors precisely 90 days to fix their bugs before the flaws were revealed to the world, only 25% complied in time; now that number is up to 98%. The share of traffic served over secure HTTPS has risen from 45% to 87% on ChromeOS, and from 29% to 77% on Android, over just the last three years … and Tabriz attributed this to UI improvements in the Chrome browser as much as to the behind-the-scenes plumbing work.
Once upon a time UX and usability were considered entirely orthogonal to security. That blind spot was probably directly attributable to the contemptuous attitudes of infosec at the time. Now, thankfully, the industry knows better. Once “community” was a dirty word among the black-clad lone wolves, and if a “vulnerability” was personal, you didn’t talk about it; now there’s an entire Community Track at Black Hat, discussing addiction, stress, PTSD, burnout, depression, and sexual harassment/assault, among other issues that would have been swept under the collective rug not so long ago.
Conventional wisdom has it that everything is terrible and everything can be hacked, and that “attackers have strategies while defenders only have tactics,” to quote Black Hat founder Jeff Moss this morning. And don’t get me wrong: some things do continue to be terrible. (Border Gateway Protocol, anyone?) But there is room for a kind of guarded optimism. Many of the big new hacks of the last few years aren’t catastrophic flaws in widely used essential infrastructure. OK, some are; but others, like Meltdown/Spectre and Rowhammer, are astonishingly elaborate Rube Goldberg hacks.
This is an extremely good sign. In the same way that airline crashes tend to have a baroque set of perfect-storm causes these days, because the simple errors are guarded against with multiple redundancy, the increasing baroqueness of major bugs suggests that the software we use is getting noticeably more secure. Slowly. In irregular fits and starts. Over a period of decades. Sometimes in devices which cannot be fixed except by complete replacement. And reducing vulnerabilities still doesn’t fix, say, the password reuse problem. But still.
We’ll see whether the rise of machine learning causes a new arms race, whether it gives us new and better tools against attackers, and/or whether convolutional pattern recognition will unearth an entire new crop of previously undetectable bugs. It’s admittedly worrying that adversarial examples are so effective at tricking current AI models. But even so I’m inclined to agree with Tabriz that there is, at long last, cause for a certain guarded optimism, both for the infosec community and its work.
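(If you haven’t met adversarial examples before, here is a toy, purely illustrative sketch of why they work so well in high dimensions: a made-up linear classifier nudged in the style of the fast gradient sign method. Every name and number in it is invented for illustration; real attacks do something analogous to far larger models.)

    # Toy illustration, not any real model: why a tiny per-feature nudge
    # can flip a classifier's decision in high dimensions, FGSM-style.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1000  # pretend each input has 1,000 features (e.g. pixels)

    # A made-up "trained" linear classifier: predict +1 if w . x > 0, else -1.
    w = rng.normal(size=n)

    # A clean input the model classifies with some confidence.
    x = rng.normal(size=n)
    clean_score = w @ x
    label = np.sign(clean_score)

    # Nudge every feature by a small epsilon against the predicted class,
    # following sign(w), the gradient of the score with respect to x.
    epsilon = 1.5 * abs(clean_score) / np.abs(w).sum()
    x_adv = x - epsilon * label * np.sign(w)

    print("clean score:      ", clean_score)
    print("adversarial score:", w @ x_adv)  # opposite sign: the prediction flips
    print("per-feature nudge:", epsilon)    # small next to feature scale (~1)

Each feature moves by only a few percent of its typical magnitude, but because every nudge is aligned with the model’s weights, the effect accumulates across a thousand features and the decision flips. That, loosely, is the worry about adversarial examples against much larger models.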
Source: TechCrunch http://j.mp/2Oqt7KD