Over the past year, I’ve read a fair number of contemporary works of non-fiction. Above them all, three works stand out in their erudition, capacity to provoke thought, and self-improvement potential. The first was Thinking, Fast and Slow by Daniel Kahneman. The second was Intuition Pumps and Other Tools for Thinking by Daniel Dennett. And now comes the third – Antifragile by Nassim Nicholas Taleb.
The magic of Antifragile is that its central thesis is fundamentally simple but infinitely extensible. From this simple foundational premise, Taleb develops an entire cohesive worldview. It’s the ultimate litmus test of a book’s potential: Taleb’s entire work lives or dies by the validity of that central premise. Fortunately, his premise – the idea of antifragility – holds up.
Taleb’s point is that one can view systems through the lens of fragility – that is, their response to volatility and variation. Fragile systems are harmed by volatility. Robust systems are indifferent to volatility. But antifragile systems benefit from volatility and variation. This mental model for thinking about systems is the leitmotif undergirding Taleb’s extensive corpus of related ideas.
There are several mechanics that can lead to a system being antifragile. Hormesis, for example, is one mechanism by which the body can actually benefit from small doses of harm visited upon it – such as strenuous exercise that tears the muscles, or vaccines that confer immunity to certain diseases by exposing our immune system to weak versions of those diseases. But the general idea of antifragility is one in which the disutility of bad outcomes from variation is outweighed by the utility of good outcomes from variation. In other words, given a stochastic distribution of variable outcomes, exposure to such variation benefits the system in the long term.
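To make that asymmetry concrete, here is a toy simulation (my own illustration, not Taleb’s): a convex payoff gains more from large swings than it loses from them, so its expected value rises with volatility, while a concave payoff is the mirror-image, fragile case.

```python
import random

random.seed(0)

def expected_payoff(payoff, sigma, trials=100_000):
    """Mean payoff under random shocks with standard deviation sigma."""
    return sum(payoff(random.gauss(0, sigma)) for _ in range(trials)) / trials

convex = lambda x: x * x     # antifragile: big swings in either direction help
concave = lambda x: -x * x   # fragile: big swings in either direction hurt

for sigma in (0.5, 2.0):     # calm vs volatile environment
    print(sigma, expected_payoff(convex, sigma), expected_payoff(concave, sigma))
```

Raising sigma raises the convex payoff’s expected value (roughly sigma squared) and lowers the concave one’s – the volatility itself is the source of the gain or the harm.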
Usually, systems are antifragile over longer time horizons. Systems benefit when they are exposed to variation over a long time horizon and have the ability to exercise what is called optionality – the ability to choose strategies for dealing with the volatility. If such systems can retain information from each individual source of variability, then over time they become better equipped to deal with variation in their operating environment, as they learn to exercise the correct responses to individual sources of volatility.
A simple example of such a system is life itself. Life is antifragile due to its capacity to evolve through the process of natural selection. Stochasticity is introduced through random mutations and changes in environment. Organisms maladapted to their environment die out – the small mistakes of the antifragile model – but successful ones propagate. The propagation of adapted organisms is the optionality inherent in the system. Natural selection therefore blindly chooses its survivors, its champions, to propagate life through the eons, refining organisms to adapt more and more to their environments.
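The selection loop just described can be sketched in a few lines (an illustrative toy, not a serious evolutionary model): random mutation supplies the variation, blind survival-of-the-closest supplies the optionality, and fitness improves without any “understanding” anywhere in the system.

```python
import random

random.seed(1)

def evolve(target=0.0, pop_size=50, generations=100):
    """Blind selection: mutate, keep the better half, let survivors propagate."""
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # random mutation introduces stochasticity
        mutated = [x + random.gauss(0, 0.5) for x in population]
        # selection: individuals closer to the 'environment' target survive
        survivors = sorted(mutated, key=lambda x: abs(x - target))[:pop_size // 2]
        # survivors refill the population
        population = survivors + [random.choice(survivors)
                                  for _ in range(pop_size - len(survivors))]
    return population

final = evolve()
print(sum(abs(x) for x in final) / len(final))  # mean distance from target shrinks
```

No individual step knows anything; the population as a whole adapts anyway.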
The interesting implication of this idea is that systems don’t need to be intelligent to thrive. Systems in volatile environments can survive just by having the capacity to make a binary choice between one option and another, and the ability to retain information about that choice for future iterations. Taleb rails against over-rationalizing systems by imputing generalized theoretical models to them – he fears that the human tendency to spin narratives out of the workings of complex systems, which are inherently impossible to generalize due to their stochasticity, makes such systems fragile to error. This is because theoretical models are ill-prepared to predict edge cases and low-probability events, and therefore ignore them, to the detriment of policymakers. In characteristically trenchant fashion, Taleb throws the whole economics profession into the sink, stating that the generalized economic models beloved of the profession are what lead to massive recessions, given their inability to predict, and therefore fend against, six-sigma events.
In the absence of theoretical rationalization, the “tinkering” mentality of trial and error, with the ability to learn not to repeat wrong choices, is, to Taleb, the superior strategy because it is totally empirical. And the most disturbing part is, you don’t have to be particularly smart to be successful at tinkering – no understanding of the theoretical underpinnings of the system is required, just a running tally of its responses to various inputs under various conditions.
This theme of wilful non-understanding has echoes in the field of machine learning. Machine learning is essentially an antifragile tinkering process carried out by a dumb computer that doesn’t understand the data it’s parsing; it merely has the ability to recognize when it has made a correct prediction by comparing its output against a validation set.
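A minimal sketch of that idea (my own toy example, with a made-up generating rule of y = 3x): the “learner” blindly tries candidate parameters and keeps whichever scores best on held-out validation data, never understanding the rule it has recovered.

```python
# The "dumb tinkerer": try candidate parameters blindly and keep
# whichever scores best on held-out validation data.
train = [(x, 3 * x) for x in range(10)]          # illustrative data, rule y = 3x
validation = [(x, 3 * x) for x in range(10, 15)]  # held-out check

def validation_error(slope):
    """Squared error of a candidate slope on the validation set."""
    return sum((slope * x - y) ** 2 for x, y in validation)

candidates = [s / 10 for s in range(0, 61)]  # blind trial values 0.0 .. 6.0
best = min(candidates, key=validation_error)
print(best)  # → 3.0
```

The tally of errors does all the work; no model of why y relates to x ever enters the picture.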
And non-understanding, ultimately, is the natural state of human affairs. Complex systems are beyond understanding or prediction. Instead of mastering the why of the system, it is better to understand the exposure of that system to catastrophe, and change our behavior to minimize exposure to potentially catastrophic outcomes. Taleb advocates the “barbell strategy” – distributing risk in such a way that your potential losses are bounded and small while your gains are unbounded and large. This could mean, for example, putting 90% of your wealth in safe stocks and betting 10% in risky derivatives, but writing off that 10% as essentially gone and factoring that loss into your prospecting horizon. That way, your maximum loss is bounded and known – 10% of wealth, while your potential gain is high and unbounded.
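A toy simulation of the barbell (all parameters here are my own illustrative assumptions, not Taleb’s numbers): the safe 90% caps the downside, while the risky 10% is written off unless a rare, outsized payoff hits.

```python
import random

random.seed(2)

def barbell_outcome(wealth=100.0, safe_frac=0.9, jackpot_prob=0.05, jackpot_mult=30):
    """One period: the safe 90% is preserved; the risky 10% is treated as
    already lost unless a rare, large payoff hits. Parameters are illustrative."""
    safe = wealth * safe_frac        # capital at (near) zero risk
    risky = wealth - safe            # capital written off in advance
    payoff = risky * jackpot_mult if random.random() < jackpot_prob else 0.0
    return safe + payoff

outcomes = [barbell_outcome() for _ in range(10_000)]
print(min(outcomes))  # worst case: loss bounded at the risky 10%
print(max(outcomes))  # best case: large, in principle unbounded, gain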
Essentially, instead of trying to predict when or how catastrophic events may occur, you merely adjust your exposure to them before they inevitably happen. Catastrophic events tend to crop up in fragile systems, where small perturbations lead to outsized, non-linear changes in state. For example, a road at 90% load will be 10% faster than one at 100% load, but many, many times faster than one at 110% load. This hypersensitivity to small changes leads to outsized negative outcomes. Rather than predicting when a road may hit 110% capacity – which could result from random events like an accident that closes a lane, or a thunderstorm – you can instead modify your exposure to that risk by, say, taking the subway instead.
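The road example is just a convex cost curve. A toy version (the specific numbers are invented for illustration; the shape is the point) shows the blow-up past capacity:

```python
def travel_time(load):
    """Toy convex congestion curve (minutes); load 1.0 means 100% of capacity."""
    base = 10.0                  # free-flow travel time, minutes
    congestion = base * load     # mild, roughly linear delay below capacity
    # past capacity, queueing makes delay explode non-linearly
    overload = 0.0 if load <= 1.0 else 10_000.0 * (load - 1.0) ** 2
    return base + congestion + overload

for load in (0.9, 1.0, 1.1):
    print(load, travel_time(load))  # 19 and 20 minutes, then a blow-up past 100
```

Below capacity, a 10% change in load barely matters; above it, the same 10% multiplies your travel time severalfold, which is exactly the non-linearity that makes the system fragile.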
Another aspect of Taleb’s thought is the idea of via negativa, or antifragility by doing less. Taleb calls the harm caused by over-intervening in an additive manner iatrogenics, and argues that in systems that are already antifragile – the environment, the human body – it is better to let the self-corrective mechanisms of those systems do the work rather than to have the work done for them. In this way, you strengthen the system by allowing it to benefit from micro-stressors, intervening only in the case of debilitating stressors. In practice, that means that if you have a mild cold or a backache, it is better to just sit it out than to head to a doctor for medication. There is also a principal-agent problem here – a doctor is naturally incentivised to dispense care for the sake of his or her livelihood rather than to cure your condition, in the event that those two outcomes are not synonymous. That tends to mean over-medication for mild conditions in a bid to show the patient that something has been done – which can be as pointless as prescribing antibiotics for influenza (and risking antibiotic resistance) just to placate the patient. As such, a better strategy is to rely on your body’s own mechanisms for small problems, and seek medical care only for large ones.
None of this is particularly original or even uncommon. But Taleb’s contribution is to systematize the mental model into a generalizable set of principles, thereby unifying the various disparate applications of antifragility under one umbrella. By understanding the whole picture, the reader is ostensibly better able to avoid the problem of domain dependence, which prevents the lessons of antifragility from being applied across different domains of experience.
The other aspect of Taleb’s book that I like is his unaffected disdain for the establishment and the orthodoxy, which he constantly attacks throughout the entire book. While irreverent and insulting, it certainly makes for amusing reading.
While I think antifragility is a useful concept, I would be wary of over-applying it to the real world, as Taleb is wont to do. I’m not entirely convinced of Taleb’s point that bricolage/tinkering without theoretical knowledge is necessarily enough to make systems work. Antifragility/tinkering makes systems finely tuned to specific environmental contexts with the same patterns of stochasticity. When those contexts change, the antifragile system may fail to adapt in a way that suits the new circumstances without a theoretician coming in to interrogate the deep operating assumptions of the system. In other words, a system can get too comfortable in its own heuristics, and propagate useless behaviors or elements in the absence of a corrective theoretical hand.
Cultural traditions or patterns of behavior are one such example. Taleb says that patterns of behavior that persist usually do so because they are antifragile. And that may be true. Dietary restrictions, for example, could have arisen as a defensive measure against communicable diseases or parasites. However, over time, as food storage and preparation methods become safer, such injunctions lose their purpose but stay around because of cultural inertia. They are antifragile but vestigial. The bottom line is that antifragility isn’t as clear-cut a ‘life-guide’ as you might suppose. Taleb says that it’s often useful to live by old cultural traditions, because their age indicates that they are antifragile by dint of having successfully stuck around for so long, and that they have some hidden mechanisms that benefit the people who practice them. But an antifragile custom may not benefit the practitioner so much as it’s primarily ‘interested’ (insofar as a meme has volition) in its own self-propagation, regardless of its utility to the practitioner – this is the selfish gene idea. As such, such customs are not necessarily useful rules to live one’s life by, especially given the opportunity costs of sticking to them. Such things need a touch of theory or empirical inquiry to better apprehend their utility (or lack thereof).
Another aspect where I’d have liked to see more development is the idea of ethics in antifragility. Taleb devotes an entire section to the idea that those who seek antifragility must have soul in the game – their antifragility must not be built on the fragility of others; they must be accountable for their decisions and not outsource the consequences to others – for example, bankers outsourcing risk to taxpayers by being “too big to fail”. But I’d have liked to see more exploration of whether antifragility is always a normative aspiration. Being evolutionary in nature, antifragility is often linked to notions of the “greater good” – it’s okay to make small sacrifices or accept small failures for the chance of greater success. But that success is built on necessary sacrifice. In medicine, unfettered human-subject experimentation seems the more antifragile way to find cures for disease – indiscriminately trying therapies, seeing which ones stick (and which are less than effective), and iterating from there. But of course, such an approach has its own ethical baggage. Is it right to sacrifice a few lives for the greater good, even if the benefits massively outweigh the costs? There are some things in which maintaining a fragile system may be the only ethical thing to do, regardless of efficacy.
Lastly, the part which I have most trouble with is Taleb’s brand of classical conservatism, which emerges as a natural outcropping of applying antifragility. As alluded to above, time is a great filter of the antifragile, and Taleb disdains those with “neomania”, the tendency to idolize the new and discard the old, even though the old, by dint of its age, is probably useful. Assuming for a moment that this might not necessarily be the case, for reasons explained above, I have to question the attitude of “sticking to the old because it works”, which seems antithetical to the idea of antifragile bricolage. Surely we don’t want to discourage neomania: the engines of progress should move forward at full speed, and the process of free-form iteration proceed apace, perhaps creating some things that will indeed stand the test of time through their antifragility. Just consider: Taleb recommends (half in jest) not reading books published in the last twenty years (excepting his own, I suppose), on the assumption that old books that still go to print are likely to be the ones worth reading. There is a lot of dross in new content that hasn’t been filtered, true, but if nobody does the reading, then who is going to make that filtration process happen? This is why I have respect for first adopters of new technologies – they brave the technical problems and make the product better. That said, however, I do think there is merit in Taleb’s dictum that it is more accurate to predict the future by predicting the things that will not exist in 20 years’ time than the new things that will. It’s always easier to say that a thing will be replaced than to specify what will replace it – the former is a binary call, the latter a choice among exponentially many possibilities.
It might seem that I have many issues with the book, but the truth is I couldn’t have enjoyed reading it more. It is one of the most engaging non-fiction books I’ve read in the recent past, and the sheer volume of my thoughts on it is demonstrative of just how much content there is to unpack in it. That is the virtue of a great book – one that provokes, not placates – and Antifragile is certainly in the former camp.
I give this book: 5 out of 5 nonlinearities