Antifragile by Nassim Nicholas Taleb
A Critic's Meta Review: 4/5
Antifragile (2012), by Nassim Nicholas Taleb, introduces and explains antifragility, a property akin to hormesis: the way some systems benefit from the chaos and stress that would normally destroy fragile things.
Any cohesive system composed of smaller participating actors or processes, from the human body to the stock market, can be fragile, robust, or antifragile. Which of the three it is depends on the system’s response to stress, randomness, chaos, or extreme conditions. In the human body, stress can come from the invasion of a virus or the exertion of exercise. In a stock market, randomness and chaos come from speculators, regulation, or market collapses. Fragile systems require predictable, steady environments and protection from stressors; a rare but extreme event will disrupt a fragile system regardless of the measures put in place to protect it. Robust systems tolerate environments ranging from neutral to chaotic: they are resilient to stressors, and exposure to shocks neither harms nor benefits them. Antifragile systems, which tend to be complex, thrive in chaotic environments and suffer when protected from stressors. Living ecosystems tend to be antifragile, in part because stress weeds out the weaker components, allowing the strong ones to thrive.
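Taleb frames this three-way distinction in terms of the curvature of a system’s response to stress: roughly, fragile systems respond concavely (harm accelerates as shocks grow) and antifragile systems convexly (gains outpace harm). The sketch below is a minimal illustration of that intuition, not anything from the book; the quadratic payoffs and uniform stress distribution are assumptions chosen purely so that Jensen’s inequality shows up in the numbers.

```python
import random

random.seed(42)

def fragile(stress):
    # Concave response: harm grows faster than the stress itself,
    # so large shocks do disproportionate damage.
    return -stress ** 2

def antifragile(stress):
    # Convex response: gains grow faster than the stress itself,
    # so variability helps on average.
    return stress ** 2

# Compare a steady environment to a volatile one with the same mean stress.
steady = [1.0] * 10_000
volatile = [random.uniform(0.0, 2.0) for _ in range(10_000)]

for name, respond in (("fragile", fragile), ("antifragile", antifragile)):
    calm = sum(respond(s) for s in steady) / len(steady)
    rough = sum(respond(s) for s in volatile) / len(volatile)
    print(f"{name:>11}: steady={calm:+.2f}  volatile={rough:+.2f}")
```

Both environments apply the same average stress, yet the convex system ends up better off under volatility and the concave one worse off, which is the fragile/antifragile asymmetry in miniature.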
Artistic and creative endeavors benefit from variability and random stresses. Chaos can increase focus and challenge individuals to improve their skills. People benefit from criticism as well as praise. Human bodies are antifragile throughout their life spans if they continue to use and challenge their systems. If they do too little work, are never exposed to contaminants, or have too little time to recover between stressors, they degrade. Like any antifragile system, a human body is complex, which means that the many components depend on each other in unpredictable ways.
Sometimes the appearance of stability or robustness masks fragility, while systems that seem chaotic because their risks are plainly visible can survive bigger shocks. A seemingly stable system may in fact be too tightly controlled, and that control can produce overcompensating reactions to randomness.
Overcompensation might take the form of interventions that have too little potential benefit and too much potential risk. Intervention is appropriate and reduces risk when it limits the size of a component in a system or when the potential cost of not intervening is large and certain. If the problem could resolve itself, procrastination is an acceptable approach.
By contrast, in an antifragile system, predictive data and interventions that reduce randomness are unnecessary. It matters more to recognize all possible vulnerabilities and reduce exposure to the downside of randomness, regardless of the probability of the rare but extreme event Taleb calls a Black Swan. Predictions carry their own vulnerabilities, making any system that relies on them fragile. Even an intervention with a high probability of working can fail precisely because it did not arise organically in the environment.
One antifragile strategy entails identifying the false assumptions in predictions that link cause and effect. In many cases, research into cause and effect accounts only for the instances where the relationship manifests and discounts those where it does not. Given this variability, trial and error is one of the most effective ways to innovate: rather than investing heavily in one intervention, an investor is better off making small investments in many different trials. Specialized knowledge, counterintuitively, can make predictions less reliable and crowd out the experimental trials that would be most effective.
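The many-small-trials argument can be checked with a toy Monte Carlo simulation. Everything below is assumed for illustration: the 95% failure rate, the 20x-to-100x payoff on a rare win, and the budget split are invented numbers, not figures from the book.

```python
import random
import statistics

random.seed(7)

def venture_payoff():
    # Hypothetical skewed payoff: most trials lose the whole stake,
    # while a rare few return a large multiple of it.
    if random.random() < 0.95:
        return 0.0
    return random.uniform(20.0, 100.0)

BUDGET, TRIALS, RUNS = 100.0, 50, 10_000

def invest(n_bets):
    # Split the budget evenly across n_bets independent trials.
    stake = BUDGET / n_bets
    return sum(stake * venture_payoff() for _ in range(n_bets))

for n in (1, TRIALS):
    outcomes = [invest(n) for _ in range(RUNS)]
    hit_rate = sum(o > 0 for o in outcomes) / RUNS
    print(f"{n:>2} trial(s): median={statistics.median(outcomes):7.1f}  "
          f"P(any payoff)={hit_rate:.0%}")
```

Both strategies have the same expected return, since the payoff is linear in the stake; what spreading the budget buys is a far higher chance of catching at least one rare win (roughly 92% versus 5% here), which is the optionality behind trial and error.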
Intervention is warranted only when the need is great enough to justify the risk. For a healthy individual, or a system already taking care of itself, the risk of intervening outweighs the benefit. Particularly in personal health, exposure to randomness and occasional shocks is preferable to excessive intervention.
Systems that usually thrive under stress are made fragile when one participant forces others to carry more of the risk. On the other hand, some people deliberately take on more risk so that others have more options in the event of a Black Swan. Exposure to risk should fall on everyone in the system alike, so that no individual can make decisions that disproportionately expose others. Individuals thus face the ethical challenge of managing personal risk without increasing the fragility of the systems in which they operate.