Rats and The Galaxy; en.wikipedia.org/wiki/Rat#/media/File:Rattus_norvegicus_1.jpg and en.wikipedia.org/wiki/Galaxy#/media/File:NGC_4414_(NASA-med).jpg
Yet another shooting in the United States; in Europe, people have used trucks as battering rams and plowed into crowds – all pretty horrible stuff. The granddaddy of them all, the attacks of September 11, 2001, used passenger planes as human-guided cruise missiles. Friedman’s The Lexus and the Olive Tree introduced us to the term “Super-Empowered Angry Men.” But no matter how many laws or safety widgets are put on technology, people who are determined to cause pain and suffering will always find a way. This has been summarized in the phrase, “The bomber will always get through.” Although it originally referred to actual aircraft used during wartime, it has been applied to terrorists as well. One recent article mentions how terrorism has gone “low-tech.” Technology changes the means of delivery, but not the tactic – terror.
No matter how badly technologies are abused for terror, some won’t be abandoned; we don’t ban cars and trucks, even though they are used in such ways. Likewise, as technology ramps up and gives more “bang for the buck,” these tools (CRISPR is a scary one) will most likely be used in even more horrific ways.
There are a few paths this increase in personal power, which can be twisted to nasty ends, might take. In one, some person or group of persons does something really horrible, like releasing a 99.99% fatal disease with high transmission rates and a long incubation time. This would essentially make civilization collapse, thereby removing the capability for super-empowered men to exist – the problem solves itself, in a crude manner. The other, less likely scenario is that people won’t want to do such horrible things, because of social restrictions and cultural mores. A possible third option is that terrorism may remain, but at such a scale that it becomes just another way of dying.
The second option might seem a bit impossible; could people willingly refrain from using all the means at their disposal to do horrible things? In some ways, this has happened already – we generally don’t worry about our neighbors blowing up our cars every morning, and many decades ago, kids actually brought their hunting rifles to school, as many can attest. A search on “kids used to bring hunting rifles to school” turns up some mind-boggling (in this day and age) stories about how routine such things were.
What do you think wins out? The collapse of civilization (stairstep or catastrophic), in which these technologies can’t be used, or does the use of technologies in a deliberate, terroristic way actually cause the collapse? The loss of cheap energy, the biosphere, etc. would be the cause of the first kind of collapse. Someone deliberately using something like an asteroid mining ship to push an asteroid onto a collision course with the Earth is the second kind – the super-empowered angry man on steroids.
Both are horrible, of course; nobody sane wants to see lots of people die, and most do not want to lose their creature comforts. If fewer people had cause to revolt, it might lessen the probability that someone would go off the rails and use technologies for nefarious purposes. Unfortunately, the possibility that people will die because someone uses technologies for nasty things is still non-zero.
Let’s look at this from a mathematical angle, and borrow a bit from the astronomy world. The Drake Equation is an argument that is used to estimate the number of extraterrestrial civilizations in the galaxy; it is:
N = R * fp * ne * fl * fi * fc * L
… with R, fp, etc. being terms that multiply together to produce this estimate. The Fermi Paradox makes you think that some of those terms are quite close to zero (or we’ve been quarantined). No matter what your opinion on the various terms (which can vary wildly, based on your assumptions), this equation gives us some sort of ballpark figure we can mull over. Likewise, the probability of people dying due to unnatural, human-directed causes could be estimated by some sort of ‘Cidial Drake Equation‘ (homicide or genocide; they are all about killing, hence the ‘Cidial’ – if anyone has a better or more accurate term, let me know) where:
N = R * fn * fz * fu / fc
N = the number of people killed per time period
R = total population
fn = fraction of people such a weapon could kill
fz = fraction of people crazy or willing enough to use such a weapon per time period
fu = repeatability of the use of such technology (related to cost; a knife versus a gun)
fc = relative cost of a technology (in money, resources, time, technological base required)
The units of N are in people killed per time period, so dimensionally, fn, fz, fu, and fc will have to be corrected for this. The essence is still the same – the casualty rate depends on a variety of factors.
So, if a weapon costs a great deal, the chances of it being built or even used are small. Likewise, if the weapon can’t kill many people (like a single-shot rifle vs. an automatic weapon), the number of people that could be hurt is also reduced. The big wildcard is fz, though – the fraction of people crazy enough to do something horrible. If fz were zero, we could be surrounded by nuclear explosives all day, and all we might have to do is tell kids not to fiddle with them; we’d only keep them locked up so they couldn’t be set off by the curious or uneducated. If fz were large, we’d have to worry about locking up even the butter knives, and our population would be continually at war.
Could N be zero? If the population is small enough, perhaps yes – small communities generally don’t have the resources to build destructive weapons, and fz might be small due to social pressures. But as R becomes large, fc might drop, and fz, even if tiny, could still lead to large values of N.
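To make the terms concrete, here is a minimal sketch of the equation as written above. Every parameter value below is an assumption invented purely for illustration, not an estimate of anything real:

```python
def cidial_n(R, f_n, f_z, f_u, f_c):
    """Casualties per time period under the 'Cidial Drake Equation':
    N = R * fn * fz * fu / fc.  All inputs are illustrative."""
    return R * f_n * f_z * f_u / f_c

# A cheap, repeatable, low-lethality weapon in a large population...
low_tech = cidial_n(R=1e7, f_n=1e-6, f_z=1e-5, f_u=10.0, f_c=1.0)

# ...versus a costly, hard-to-repeat, but highly lethal one.
high_tech = cidial_n(R=1e7, f_n=0.5, f_z=1e-8, f_u=0.1, f_c=1e4)

print(low_tech, high_tech)
```

With these made-up numbers, the cheap weapon dominates despite its tiny lethality – cost (fc) and repeatability (fu) do a lot of the work, which is the point of keeping them as separate terms.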
- What else would you add to this oddball version of the Drake Equation?
- Such horrible calculus has been discussed in a few places, most notably, Fight Club:
Narrator: A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don’t do one.
Is there something missing from this equation as well? The ‘out of court settlement’ bit is the financial cost to the company per incident, but certainly these effects can become non-linear. One failure is a fluke, but ten or one hundred might increase the payouts.
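One way to see the non-linearity point: the Fight Club formula assumes a constant per-incident settlement, so X = A * B * C. Here is a sketch under the purely assumed premise that the effective payout scales super-linearly with the incident count (the escalation exponent is invented for illustration):

```python
def recall_breakeven(A, B, base_settlement, recall_cost, escalation=1.5):
    """Compare expected payouts against a recall.  The linear model is
    X = A * B * C; the non-linear variant (an assumption) scales total
    payouts as incidents**escalation, so they grow faster than linearly
    once failures pile up."""
    incidents = A * B  # expected number of failures in the field
    linear_X = incidents * base_settlement
    nonlinear_X = base_settlement * incidents ** escalation
    return linear_X, nonlinear_X, recall_cost

lin, nonlin, cost = recall_breakeven(A=1_000_000, B=1e-5,
                                     base_settlement=200_000,
                                     recall_cost=5_000_000)
print(lin, nonlin, cost)
```

With these (invented) numbers, the linear model says skip the recall, while the non-linear one says do it – the same data, a different conclusion, purely because the payouts stop being proportional to the incident count.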
- What sort of cross-correlation is there between things like R, the density of R, fc, and fz? These variables may be interdependent. For example, as the density of rats in a cage (or people in cities) goes up, fz might become a bit larger as well. Or would such close-quarters living breed a new set of behaviours? The fz term might go (dangerously) up for a while, until it was bred out of the population, after which it would decline. The equation might be modified to handle both R (population) and something like resource consumption. Do wealthy people or societies have higher values of fz?
- The non-linearity of fz in simple models of population can cause some odd effects. If fz is non-linear with population size or density, what happens? What is the best way to reduce fz? Stay tuned.
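As a closing sketch, here is a toy model – entirely an assumption, with made-up constants – in which fz grows with population via a power law, so that N rises super-linearly with R:

```python
def casualties(R, fz0=1e-8, R0=1e6, gamma=0.5, f_n=1e-4, f_u=1.0, f_c=1.0):
    """N = R * fn * fz * fu / fc, with fz itself a function of population:
    fz(R) = fz0 * (R / R0) ** gamma.  A gamma > 0 means crowding raises
    the fraction of people willing to use a weapon (pure assumption)."""
    f_z = fz0 * (R / R0) ** gamma
    return R * f_n * f_z * f_u / f_c

for R in (1e6, 1e7, 1e8):
    print(f"R={R:.0e}  N={casualties(R):.6f}")
```

With gamma = 0.5, a tenfold jump in population gives roughly a 31-fold jump in N, since both R and fz grow together – one simple way the equation can behave non-linearly even when every individual term looks tame.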