Will people be motivated to learn difficult disciplines and skills without economic incentive?
In a recent interview with ABC News, Sam Altman emphasised multiple times that he sees individual AI tutoring as one of the greatest applications of AI:
I'm most excited about is the ability to provide individual learning -- great individual learning for each student.
However, I’m very sceptical that this individual tutoring technology will be widely used in developed countries, and, one generation down the line, in developing countries.
Apart from an engaging tutor, learning cognitively demanding disciplines and skills requires motivation. The main motivations that people have for learning are:
Competitive: people learn skills to outcompete others in some (status) game. This might be linked to mating.
Economic: people learn difficult disciplines to get a job and earn a living. This motivation is based on fear of the loss of livelihood.
Intrinsic: people learn difficult disciplines because they are intrinsically motivated to learn the subject, or find the process of learning difficult skills intrinsically rewarding. The latter could also be seen as self-actualisation motivation.
Altruistic: people learn skills for doing good.
(GPT-4 also suggested there is a “social” motivation of “fitting in”, but I think this is hardly the case: is there any society where people are ostracised for not possessing difficult skills? I doubt it. This is especially true in the social groups of children, where, if anything, the opposite holds: kids may be ostracised for trying to become “too smart” or for “learning too much”.)
Outside of the contexts where competitive motivation is a proxy for economic success (and, by extension, mating success), competitive motivation seems to primarily apply in games, such as poker, chess, and Counter-Strike. Indeed, AI tutors have the potential to increase the competition and skill of human players, as has already happened in chess.
On the other hand, economic motivation for learning will crumble. In a few years, the whole population of developed countries will realise that we are heading towards complete automation of all cognitive labour. Universal basic income will appear inevitable. Kids will realise this too; they are not stupid. By extension, the mating motivation that was conditioned on learning difficult skills (“learn math or programming” → “find a high-paying job or start a business” → “attract a good mate”) will crumble as well. And so will the altruistic motivation: once AI becomes superhuman, assuming it is aligned, the most altruistic thing to do will be to not try to “help” it with any cognitive tasks.
I believe that true intrinsic motivation for learning is either very rare or requires a long, well-executed process of learning with positive feedback, so that the brain literally rewires itself to self-sustain motivation for cognitive activity (see Di Domenico & Ryan, 2017). Almost certainly, this requires being raised in a family where someone already has intrinsic motivation for learning and is very determined to develop such motivation in the kids. So, we cannot expect the percentage of people who are intrinsically motivated to learn to grow beyond the tiny fraction of those who are naturally born with this predisposition.
Realistic futures of motivation and education
In a world with UBI and complete cognitive automation, it seems more or less plausible to me that humans will channel their activity in the following directions:
Physical games and competitions: football, basketball, mountaineering, bodybuilding, surfing.
Cognitive games and competitions: poker, chess, eSports.
Learning advanced science: AGI will develop science that is, in principle, far beyond the capacity of unaided, unmodified human brains to grasp. People who are intrinsically motivated to learn will simply try to climb this ladder as high as possible. This could also become a sort of status game, albeit likely a very niche one.
Manual and physical labour: in a world with UBI, labour that is still not automated should automatically become highly paid to motivate anyone to do it; if the world becomes (almost) abundant by that point and money ceases to be a serious motivator, such labour must instead become socially praised and a status occupation.
Spirituality and compassion: I can imagine that some people will respond to the meaning crisis by turning towards meditation, Buddhism, or developing as much love, gratitude, and compassion for everyone and everything that surrounds them as possible.
Debating beauty (art, fashion) and developing tastes for beauty and art. If the value of art is fundamentally subjective (rather than objective, as David Deutsch conjectured), then even though the art itself may be created mostly by AI, people can indulge in open-ended development of artistic thought endlessly: this is effectively a random walk on the spaces of form and style. If the beauty and/or value of art is in some sense objective, this could be turned into a “climbing the ladder” competitive exercise, akin to learning advanced science.
Just being frustrated and/or addicted to food, sex, or simple visual stimuli like TikTok or games.
If you are pessimistic, you may think that the last point will dominate the condition of humanity post-AGI. One can even argue that the downward spiral of depression, frustration, and addiction will inevitably lead to the extinction of the human race if humanity does not merge with AI in some way, either because such technology proves infeasible or because AGI does not permit humanity to do so for some reason.
Most of the directions outlined above require the excellent AI tutors Altman is excited about creating. The exceptions are “cognitive games and competitions”, “learning advanced science”, and, to some degree, “developing tastes for beauty and art”. If a large proportion of humanity does indeed engage in these types of activities in the post-AGI future, I would consider this a very good outcome of the AI transition, but it intuitively seems to me that this outcome is very unlikely, even conditioned on solving technical alignment and the governance and global coordination challenges that will surround the AI transition.
To sum up: my P(at least 30% of all people are flourishing by learning and doing other cognitively difficult stuff|no human-AI merge, AI is aligned and doesn’t kill everyone) is less than 10%.
Does anyone know the opinions of expert sociologists, psychologists, educators, and anthropologists on this topic?
If Sam Altman truly believes that the future in which a lot of people thrive by learning is likely and that OpenAI’s strategy leads to such a future, I think either his thinking is flawed, or “AI tutors for everyone” is just a convenient slogan from the perspective of marketing and politics rather than his actual belief about the future.
I don’t see how OpenAI’s product and licensing strategy could be meaningfully different so that it approaches the “enlightened future” with a higher probability than otherwise. For example, I think that licensing generative AI technology to Snapchat and to Microsoft Copilot are both bad for society, at least in the short to medium term. On the other hand, the first might be good for OpenAI financially, while the second was vital for OpenAI financially and will probably be good for the economy by some metrics, though bad for the resilience and antifragility of the economy (business processes depend on OpenAI → OpenAI is down → half of the economy is down). However, none of these factors seem to me to directly affect the prospects of the “enlightened future”.
I’m not sure anything can actually bring about the “enlightened future” (again, conditioned on the human-AI merge not happening). If this is the case, I think it would be more truthful of Altman to say: “I’m excited for AI to automate all labour; then, if the human-AI merge is feasible, we can tap into unfathomable knowledge. If not, or if aligned AI does not permit us to do this, we can at least meditate, play, and live happy, untroubled lives, while the minority of geeks can indulge in trying to learn the finest theories of physics and science, developed by the AI.”
For similar reasons, I’m sceptical of Altman’s invocation of “creativity”:
[…] human creativity is limitless, and we find new jobs. We find new things to do.
New things? Yes. New jobs? There is no need to call these new things “jobs”. Will these “things to do” be creative? I doubt it. Only subjective artistic exploration may count as such, but, as with learning advanced science, I think only a small portion of the population is intrinsically motivated by artistic exploration.
Please forgive me that this classification doesn’t follow any convention from the psychological literature, such as self-determination theory. Having read Chater’s “The Mind is Flat”, I’m very sceptical of the scientific standing of any such theories. For storytelling purposes, the classification proposed above is more convenient than SDT.
There are also gradations of this disposition. I consider myself perhaps within the top 10% of the most intrinsically motivated learners in the population, yet I struggle to read dry books and papers for the knowledge contained therein alone. For me, pop-science videos on YouTube and podcasts are more a form of entertainment than media for learning difficult disciplines and applying serious cognitive effort.