Morphological intelligence, superhuman empathy, and ethical arbitration
Morphological intelligence (Levin, 2022) is the ability to solve problems in morphological space, e.g., to navigate to the target morphology of one's species. For example, tadpoles with scrambled craniofacial organs remodel into largely normal frogs (Vandenberg et al., 2012), and salamanders can regenerate missing limbs.
Another type of morphological intelligence is the ability to remodel oneself in order to solve a different task in three-dimensional (traditional behavioural) space. An example is insect metamorphosis, which insects undergo to disperse, find mates, and reproduce.
If we assume consciousness functionalism, undergoing metamorphosis to achieve the psychological goal of experiencing another agent's qualia and emotions should be possible for an AI powerful enough to perform whole-brain emulation (or, perhaps, whole-organism emulation) and then live through an episode of the organism's life in a simulated environment.
It's unclear whether it is possible to retain, and later retrieve, a conscious memory of "what it is like to be a bat" (or a human, or another form of AI) after the AI remodels itself back into its "base" morphology. The retention of symbolic or "indexical" memories should be possible (Hammelman et al., 2016), but the minimal physicalism theory of consciousness (Fields et al., 2021) suggests that interpreting any state (including recorded memories) through morphologically different computational networks (quantum reference frames) entails different conscious experiences [1].
Regardless, even if retaining the memory of qualitative experience through morphological remodellings is impossible, it should be possible for the AI to emulate the brains of two organisms of different species at the same time and integrate them into a single conscious experience. This AI could live through a simulation episode in which the experiences of the two organisms that we want to weigh (for the purposes of ethical decision-making) happen simultaneously, e.g., a fish being caught and killed and a human enjoying eating that fish in a restaurant. Then we can ask this AI whether killing fish for people's pleasure is moral or not (provided that the AI is also equipped with a scale-free theory of ethics).
Therefore, given the human brain's low capability for morphological remodelling, and thus its low capacity for empathy on the absolute, non-speciesist scale, it seems ethically inappropriate for humans to remain "in the driving seat" when it comes to ultimate moral decision-making in a future advanced civilisation. This also casts doubt on the idea of "coherent extrapolated volition", which, as usually conceptualised, would be incapable of extrapolating beyond the human capacity for universal empathy [2].
References
Fields, C., Glazebrook, J. F., & Levin, M. (2021). Minimal physicalism as a scale-free substrate for cognition and consciousness. Neuroscience of Consciousness, 2021(2), niab013.
Hammelman, J., Lobo, D., & Levin, M. (2016). Artificial neural networks as models of robustness in development and regeneration: Stability of memory during morphological remodeling. In Artificial Neural Network Modelling (pp. 45-65). Springer.
Levin, M. (2022). Technological approach to mind everywhere: An experimentally-grounded framework for understanding diverse bodies and minds. Frontiers in Systems Neuroscience, 16, 768201.
Vandenberg, L. N., Adams, D. S., & Levin, M. (2012). Normalized shape and location of perturbed craniofacial structures in the Xenopus tadpole reveal an innate ability to achieve correct morphology. Developmental Dynamics, 241(5), 863-878.
This post was originally published on LessWrong.
[1] This is my interpretation of the minimal physicalism theory, not a statement made by the authors of the theory.
[2] This is not my only reason to think that CEV doesn't make sense. Other huge (and, as it seems to me, irreparable) problems with CEV are the open-endedness of the civilisational journey and the inherent contextuality of ethical inferences and moral value.