On trust, information inequality, and open source technology
This is the second post in a series about open source. The first one was “Capitalistic corporations don't like diversity”.
Trustworthiness will become much more important than it is today
Fifty years from now, the world's "economic activity" (I don't like this term; I would rather call it, more generically, "the activity of humans and human-created entities") will likely consist mostly of information generation, exchange, and processing.
On the other hand, over the last 10 years we have seen more and more things appear that undermine our trust both in the information we receive and in the systems that collect, store, and process our data: deepfakes, online propaganda, fake news, election interference, institutionalised privacy violations (the NSA and other mass surveillance programmes, Facebook, etc.), and data breaches.
Considering these two trends, I think trustworthiness will become a much more important factor for agents (people, organisations, automated systems) in choosing other agents to deal with.
What can we do about the coming information inequality?
If trustworthiness becomes more valuable, the knee-jerk reaction of the current economic system will be to put a price premium on it. We see this happening already: subscriptions to more trustworthy media and data providers cost more than subscriptions to less trustworthy ones, and data storage services (cloud, hosting, SaaS) sell extra encryption and security as add-on features in their highest-tier enterprise plans.
Within the current economic system, it seems reasonable that trust costs money. People who check facts have to do extra work. Information encryption and decryption, checking that some information (e.g., a video) is not fake, and other verification procedures like that take extra processing cycles (which means extra energy expenditure).
However, in this system the price of trust will only ever go up: as more elaborate algorithms for faking information or breaking encryption appear, more processing will be needed to protect or verify information. This is an arms race, and I'm afraid that in the long term the "white hat" side is at a serious disadvantage, if not doomed: it's much easier to fake an image, a video, or a text than to verify that it is not faked. Advances in quantum computing might tip the scales in favour of the "black hats" in the area of encryption, too.
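To make the "extra processing cycles" point concrete, here is a minimal sketch in Python of the cheapest kind of verification: an integrity check with a keyed hash. The key, the file, and the scenario are all hypothetical, and checking that a video is not a deepfake is far harder than this; but even the simplest integrity check already has a cost that grows linearly with the amount of data being trusted.

```python
import hashlib
import hmac

# Hypothetical shared secret between the publisher and the consumer.
SECRET_KEY = b"publisher-signing-key"

def sign(content: bytes) -> str:
    """Publisher side: compute an authentication tag over the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Consumer side: re-hash the entire content and compare tags.

    The cost is linear in the size of the content, so verifying a
    multi-gigabyte video costs real processing cycles and energy.
    """
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

video = b"\x00" * (100 * 1024 * 1024)  # stand-in for a 100 MB video file
tag = sign(video)
assert verify(video, tag)             # intact content passes
assert not verify(video + b"!", tag)  # any tampering is detected
```

Every consumer pays this hashing cost on every download, and that is the floor: authenticating the origin of content, let alone its truthfulness, costs strictly more.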
This means that if the economic system doesn't change, people will have very uneven access to good information in the future. Access is already uneven, but this inequality will deepen, and having access to good information and to trustworthy and secure service providers will become important in people's lives. Information inequality might even become a bigger issue than wealth and political inequality.
People will only trust open source technology
In the future, technology will need to be open for people and agents to agree to use it. This primarily applies to software and AI technology, but not only to them. In hardware, it applies to life-support and medical devices and to safety-critical hardware, from avionics to high-voltage battery systems, robots, and drones. In process technology, it applies to food and medicine production, but also more broadly to any material production, because people will want to verify the safety and sustainability of the materials they use.
I have one interesting anecdote hinting at this trend. The technology in question is a battery management system (BMS) and battery analytics, far from the kinds of software usually invoked in discussions of open source. I recently spoke to people from a battery analytics startup that estimates a battery's state of health from the battery's telemetry signals. I asked them why their customers (owners of the batteries or service companies) would prefer their analytics over the analytics provided by the battery manufacturers themselves (such as Northvolt). They replied that their customers might not fully trust the battery vendors: a vendor could be interested in presenting the batteries as more worn than they actually are, so that the batteries are decommissioned sooner and the vendor can charge money for recycling and sell new batteries. This concern reminds me of Batterygate (Apple slowing down iPhones with aged batteries without telling users) and is probably inspired by it.
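For a rough idea of what such analytics compute, here is a toy sketch of estimating state of health by coulomb counting: integrate the measured current over one full discharge and compare the delivered capacity with the nominal one. All names and numbers are hypothetical, and real BMS analytics are far more involved.

```python
from typing import Sequence

def state_of_health(
    currents_a: Sequence[float],   # discharge current samples, in amperes
    sample_period_s: float,        # time between samples, in seconds
    nominal_capacity_ah: float,    # nameplate capacity when the battery was new
) -> float:
    """Toy state-of-health estimate by coulomb counting.

    Integrates the current over one full discharge to get the capacity
    the battery actually delivered, then compares it with the nameplate
    capacity. Real BMS analytics correct for temperature, current rate,
    and partial cycles; this sketch deliberately ignores all of that.
    """
    delivered_ah = sum(currents_a) * sample_period_s / 3600.0
    return delivered_ah / nominal_capacity_ah

# A battery rated at 100 Ah that delivered only 87 Ah over a full cycle:
telemetry = [10.0] * 31320  # constant 10 A draw for 8.7 hours, sampled at 1 Hz
print(f"SoH: {state_of_health(telemetry, 1.0, 100.0):.0%}")  # -> SoH: 87%
```

The point of the anecdote is precisely that a battery owner could run a calculation like this on the raw telemetry themselves, instead of trusting an opaque number reported by a vendor with a conflict of interest.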
Currently, open source dominates only in the types of software whose users are developers themselves, such as programming languages (was Java, in 1996, the last major programming language to launch with a proprietary compiler?), frameworks (of almost any flavour: from frontend and DevOps to Machine Learning), and blockchain. Other domains, such as big data processing systems and databases, are on the fence: there are a lot of open source technologies, but many strong proprietary players as well.
In still other areas of software development, and predominantly in hardware, food, and process engineering, people rarely build technologies in the open, and even when they do (see some examples of open-source hardware, robotics, beverages, etc.), they don't mention consumer trust as their motivation. I predict that this will change, albeit slowly, perhaps over the next 20 to 30 years. People will begin to think that technology ought to be open source to be trustworthy, even if it is not a tool for programmers and is not related to cryptography.
I'll go through most of the arguments against the open source model separately. Here, I'll discuss only the one directly related to trust: the argument that rogue actors can hack or abuse open technology more easily than closed technology, because security through obscurity is a valid type of defence as long as it is not the only one a system uses. However, even though closed technology is potentially more secure, people will still likely trust it less than open technology, because there is a long history of security breaches in proprietary systems. People will learn the lesson that developers of closed systems rely on obscurity too much and often neglect to protect the system properly.
Alright, this is perhaps not that black-and-white: I don't know whether there are currently any algorithms for detecting computer viruses, spam, fraud, or fake information that can afford to assume the attacker knows the algorithm in full detail.
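By contrast, cryptography routinely satisfies Kerckhoffs's principle: a system must remain secure even if everything about it except the key is public. Here is a toy sketch of why content-detection systems tend to depend on secrecy instead (the spam rule below is hypothetical and deliberately naive):

```python
import re

# A toy spam rule: flag messages mentioning "free money" (hypothetical).
SPAM_PATTERN = re.compile(r"free\s+money", re.IGNORECASE)

def is_spam(message: str) -> bool:
    """Classify a message by pattern matching, the way naive filters do."""
    return bool(SPAM_PATTERN.search(message))

print(is_spam("Claim your FREE MONEY now!"))   # True
# An attacker who has read the rule routes around it trivially:
print(is_spam("Claim your FR-EE M0NEY now!"))  # False
```

A keyed primitive like the HMAC in the earlier sketch stays secure even when the attacker reads its full source code, because the secret lives in the key, not in the algorithm; a detection rule, on the other hand, is its own secret and is defeated as soon as it is known.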
People will trust AI only if it belongs to them, not to a corporation
This idea is a natural companion to the previous one. As Stuart Russell puts it in Human Compatible:
People will trust personal assistants only if their primary obligation is to the user, not to a corporation.