Capitalistic corporations don't like diversity
Note: this article discusses diversity as a system property, and not workplace diversity as the title might also suggest.
I’m currently writing about my new views on open source. In this post, I discuss the ideas that lead up to those views; this context might be useful to keep in mind.
Update: the second post in this series is “On trust, information inequality, and open source technology”.
Diversity is fundamentally valuable
To me, diversity has intrinsic aesthetic value. Diversity is beautiful.
If it doesn't seem so to you, you can still appreciate that diversity is really important for long-term sustainability and reliability. Donella Meadows wrote that diversity is a powerful factor in system self-organisation. This is a long quote, but please bear with it; every word is important:
Self-organization is basically a matter of an evolutionary raw material — a highly variable stock of information from which to select possible patterns — and a means for experimentation, for selecting and testing new patterns. For biological evolution the raw material is DNA, one source of variety is spontaneous mutation, and the testing mechanism is something like punctuated Darwinian selection. For technology the raw material is the body of understanding science has accumulated and stored in libraries and in the brains of its practitioners. The source of variety is human creativity (whatever THAT is) and the selection mechanism can be whatever the market will reward, or whatever governments and foundations will fund, or whatever meets human needs.
When you understand the power of system self-organization, you begin to understand why biologists worship biodiversity even more than economists worship technology. The wildly varied stock of DNA, evolved and accumulated over billions of years, is the source of evolutionary potential, just as science libraries and labs and universities where scientists are trained are the source of technological potential. Allowing species to go extinct is a systems crime, just as randomly eliminating all copies of particular science journals, or particular kinds of scientists, would be.
The same could be said of human cultures, of course, which are the store of behavioral repertoires, accumulated over not billions, but hundreds of thousands of years. They are a stock out of which social evolution can arise. Unfortunately, people appreciate the precious evolutionary potential of cultures even less than they understand the preciousness of every genetic variation in the world’s ground squirrels. I guess that’s because one aspect of almost every culture is the belief in the utter superiority of that culture. Insistence on a single culture shuts down learning. Cuts back resilience. Any system, biological, economic, or social, that gets so encrusted that it cannot self-evolve, a system that systematically scorns experimentation and wipes out the raw material of innovation, is doomed over the long term on this highly variable planet.
The intervention point here is obvious, but unpopular. Encouraging variability and experimentation and diversity means “losing control.” Let a thousand flowers bloom and ANYTHING could happen! Who wants that? Let’s play it safe and push this leverage point in the wrong direction by wiping out biological, cultural, social, and market diversity!
In Drift into Failure, Sidney Dekker also talks a lot about how diversity is important for resilience and for "drift into success". Greater diversity of agents in a complex system leads to richer emergent patterns (including beneficial ones) and typically produces greater adaptive capacity.
Full optimisation of a complex system is undesirable because the lack of slack and margins can turn small perturbations into large events. Diversity of opinion helps to prevent exhaustive optimisation. (In machine learning, there is an analogous idea: use an ensemble of models to prevent overfitting; see the sketch below.)
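To make that analogy concrete, here is a minimal sketch of the ensemble idea (my own illustration, not something from the texts cited above; the synthetic sine dataset, the choice of decision trees, and the ensemble size of 50 are all arbitrary assumptions):

```python
# Minimal ensemble sketch: each unpruned decision tree badly overfits
# the noise in its own bootstrap sample, yet averaging many such
# "diverse opinions" cancels much of the variance. All concrete choices
# here (data, model, ensemble size) are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=200)  # noisy target

trees = []
for _ in range(50):
    idx = rng.integers(0, len(X), size=len(X))      # bootstrap resample
    trees.append(DecisionTreeRegressor().fit(X[idx], y[idx]))

X_test = np.linspace(-3, 3, 100).reshape(-1, 1)
truth = np.sin(X_test[:, 0])

single = trees[0].predict(X_test)                   # one overfit model
ensemble = np.mean([t.predict(X_test) for t in trees], axis=0)

print("single tree MSE:", np.mean((single - truth) ** 2))
print("ensemble MSE:   ", np.mean((ensemble - truth) ** 2))
```

The single tree memorises noise; the ensemble, precisely because its members disagree, tracks the underlying signal far better. Homogenise the ensemble (train every tree on the same data) and the advantage largely disappears: the diversity is what does the work.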
There are many different kinds of diversity: biological, genetic, cultural, technological, diversity of ideas, etc. In the context of open source, I'm primarily interested in ways to foster technological diversity.
Capitalistic corporations are misaligned with people and are against diversity
The dominant model of human organisation today is the for-profit corporation whose only success metric is monetary profit for its shareholders. For example, here's the approximate distribution of the workforce in the US:
Large corporations (more than 500 employees): 45%
Small and medium-sized companies: 30%
Public sector: 15%
Non-profits: 10%
It is well established that, at the global system level, the current capitalistic model leads to inequality and "races to the bottom" of natural resources and the planet's ecological capacity. Nowadays, people mainly argue about whether the (supposedly beneficial) economic growth brought about by capitalism outweighs the downsides, and whether radically alternative models would be any better; they no longer argue about whether the current implementation of capitalism has serious issues at all.
If we look at specific companies, there are examples of companies that have degenerated into something obviously detrimental to society, serving no one apart from their shareholders and top management, not even the bulk of their own workforce. (Think of coal mining or industrial fisheries.)
Today's most progressive thinking about how to fix the capitalistic model revolves around replacing the purely financial metrics of a company's operation with more complex and comprehensive metrics aligned with the Sustainable Development Goals (see some examples).
I definitely support companies adopting such new success metrics; it would make them more aligned with people.
However, replacing success metrics still doesn't alter the basic mechanic of capitalism: companies always seek growth and, ideally, monopoly. And this drive is directly opposed to diversity.
An "ideal" technological business today is a simple digital product with no cost of replication, developed by a small team of engineers, and which every human on the planet uses. Regardless of how sustainable and humane the product is (whether this is a fitness or a wellness app, or an app helping people to reduce their carbon footprint), there is very little diversity in this vision.
In the years to come, a lot of companies will try to achieve exactly that: they will attempt to build digital services that integrate into the lives of as many people as possible. Some have already succeeded: think of Google controlling what people can find on the web, Facebook determining what millions of people read, YouTube and TikTok tapping directly into people's novelty-seeking patterns, and Instagram entirely reforming people's ways of relating to the world.
Powerful monopolistic technology can pose significant risks to the world
In Antifragile and elsewhere, Nassim Taleb notes that the larger a system becomes, the more fragile it gets. Yet even he doesn't see serious issues with corporations seeking monopoly in the free market. Taleb thinks that large monopolistic corporations quickly collapse under their own weight and new contenders fill the niche: "as soon as a company enters S&P 500, it starts a suicide process".
However, taking Taleb's own ideas about non-linear effects into account, I think the most powerful technologies of the future can expose the world to significant risks (or cause significant harm via side effects) if they reach a sufficient adoption scale.
I think Taleb and many other people don't fully appreciate these risks because we don't yet have technologies powerful enough to make them obvious.
So far, we have seen only glimpses of such possibilities, e.g. in the case of Bitcoin, which I recently called a "totalitarian algorithmic system". One scary property of Bitcoin is that it is very hard to shut down, even for such powerful entities as the United States and China (though not yet completely impossible, at least in theory: I think the United Nations or the G20 could kill Bitcoin, but it looks very unlikely to me that they will want to). Taleb's assumption that the bigger the system, the sooner it will die could sometimes be false. It is quite realistic that some AI technology (and hence the company that develops it) could completely and irrevocably "capture" the world if it were allowed to grow unchecked in the free market.
Another alarming feature of Bitcoin is how much harm (or just sheer effect, if you don't like to think of Bitcoin's effect on the world as harmful) it causes, given that it took a single person, Satoshi Nakamoto, only a few thousand lines of code and a single white paper to start this system.
Perhaps a different existing example of such a powerful technology is China's mass surveillance system (and the Social Credit System which rests on it). Notice that, for now, a technology has to be obviously sinister to pose appreciable risks to the world. But in the future, even the most benign-looking systems designed with the best intentions could carry hidden risks during their "reign" (e.g. some unanticipated bias or misalignment in an AI system could stir up a degenerate political movement or harm the culture). Even the death of a useful technology can be risky, because this event can cause a strong withdrawal effect.
It seems to me that Yuval Noah Harari is more concerned about these risks than Taleb. He discusses similar ideas in many of his recent talks and in 21 Lessons for the 21st Century.
As we see in the example of Bitcoin, a technology doesn't have to be proprietary and controlled by a private company to carry risks. Open source is not a panacea against creating something harmful; we must still apply our best thinking to designing systems that bring out the best in humans. However, it seems that closed-source, corporate-owned technologies are generally more likely to turn out damaging than open source ones.