The AI safety proposal "AI Scientists: Safe and Useful AI?" was published by Yoshua Bengio on May 7, 2023. Below I quote the salient sections of the proposal and comment on them in turn.

Main thesis: safe AI scientists

[…] "I would like to share my thoughts regarding the hotly debated question of long-term risks associated with AI systems which do not yet exist, where one imagines the possibility of AI systems behaving in a way that is dangerously misaligned with human rights or even loss of control of AI systems that could become threats to humanity. A key argument is that as soon as AI systems are given goals – to satisfy our needs – they may create subgoals that are not well-aligned with what we really want and could even become dangerous for humans."
Yes sir! I recently read Nicole Perlroth's book This Is How They Tell Me the World Ends, and one cannot help but wonder where all this is heading!
We already have people using ChatGPT to write code, and it seems inevitable that the NSA types are going to intentionally develop AI for hacking purposes. What then? It's going to be like the archetypal brother-battle in the cloud: battling AIs straight out of William Gibson.