An LLM-based “exemplary actor”
Intro and summary
This post is the second section of "Aligning an H-JEPA agent via training on the outputs of an LLM-based 'exemplary actor'", posted separately because I think it could warrant a separate discussion, largely independent of the discussion of the H-JEPA agent with GFlowNet actors. Here's the summary of this post, copied from the "Overview" section of the main article:
In section 2, I describe the “exemplary actor”, an LMCA (language model cognitive architecture) that takes a simple, “brute force” approach to alignment: a powerful LLM (think GPT-5/6 level, with a vast, or quasi-unlimited context) is given a list of “approved” textbooks on methodological and scientific disciplines: epistemology, rationality, ethics, physics, etc. Also, the LLM is given tools: narrow AIs (such as for protein folding or for predicting properties of materials, or for formal scientific modelling). Finally, the LLM is given a compute engine such as Wolfram and a knowledge base such as Wikidata or Wolfram Knowledgebase.
The exemplary actor creates plans or predictions for given situations (described in language and fed to the LLM underlying the exemplary actor as prompts) and iteratively critiques and refines its own plans and predictions while putting different textbooks into the LLM context (first, with the textbook on rationality, then epistemology, then physics, etc., with potentially dozens of different textbooks relevant for a plan or prediction that is being criticised), for many iterations, until convergence.
In section 2.1, I note that the type of alignment that the exemplary actor’s architecture tries to ensure is called (world) model alignment and that it is stronger and also more essential than goal alignment.
Then, I discuss the properties of the exemplary actor. In section 2.2, I discuss what I see as likely non-issues or straightforwardly addressable issues: the “divergent reasoning nature” of LLMs, the lack of grounded common sense reasoning, and the bias of the quick reactive network (”System 1”), if it is added to the architecture to make it more practically usable in lower-stakes reasoning settings.
In section 2.3, I discuss the outstanding technical issues and risks of the exemplary actor’s architecture:
The risk of direct access to the underlying LLM (section 2.3.1).
The exemplary actor’s reasoning could still be partially directed by “alien” thinking patterns (i.e., the world model) of the underlying LLM even though these influences won’t surface in the explanations of the plan (section 2.3.2).
Iterated critique and refinement probably won’t make plans strictly conform to the theories described in the textbooks (section 2.3.3).
In section 2.3.4, I discuss the alignment tax of the exemplary actor (compared with the baseline of a bare, minimally fine-tuned LLM) and conclude that the main source of alignment tax might turn out to be the theory of ethics, which may force the exemplary actor to refuse to participate in “games” (i.e., real-world situations and environments) where it doesn’t see ethical ways of “winning”, and thus to consider inaction (or some form of palliative action) the only ethical way forward. This is not a technical problem with the exemplary actor per se, but rather a problem with a higher-level system, i.e., the current economic, social, and political structure of the world. I mention this and other kinds of “higher-level” risks of the plans to build and deploy the exemplary actor (i.e., roughly the plans that OpenAI and Anthropic are betting on, as it seems to me) in section 2.4.
2. An LLM-based “exemplary actor”
Let's assume that we have three things:
First, a very powerful auto-regressive LLM (think GPT-5/6 level) with the ability to effectively attend to or to “keep in mind” hundreds of thousands of tokens, either through sparse attention, multiscale decoding (Yu et al., 2023), Unlimiformer-style techniques (Bertsch et al., 2023), or whatever. The LLM can make correct semi-formal[1] inferences (e.g., for criticising and refining its own output), picking up the right information from the context.
Second, a bunch of narrow AI tools (or good old algorithms, or “compute engines” like Wolfram) for specific types of problems, such as a GFlowNet for generating principled (scientific) causal models from data. The LLM is trained to use these specialised AIs, i.e., is augmented with them (Mialon et al., 2023). One of the tools should be a knowledge base, such as Wikidata or Wolfram Knowledgebase.
Third, textbooks on SoTA theories of philosophy, math, physics, game theory, control theory, cognitive science, rationality, epistemology, consciousness science, ethics as science[2], etc. All these theories are reasonably harmonised and connected with each other, so that, in the language of David Deutsch, they should be hard to vary because they are constrained by each other.
Note that currently, SoTA theories in most of the aforementioned fields of science are either very unsatisfactory (e.g., there are currently no satisfactorily developed scientific theories of consciousness and ethics) or aren’t harmonised with each other. Developing new good theories and harmonising them all with each other will be the job for scientists to do at the AGI labs that will develop the sufficiently powerful LLM first (or a CoEm, which, I suspect, is planned to be a sort of wrapper around a somewhat less powerful LLM)[3]. In other words, I assume that the language model cognitive architecture (LMCA) built around the LLM will be powerful enough to do highly abstract theoretical science either autonomously or with minimal supervision by human researchers.
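Schematically, these three ingredients might be bundled into something like the following (a purely illustrative sketch; the class and field names are mine and not part of any proposed implementation):

```python
# Purely illustrative bundle of the three assumed ingredients of the exemplary actor.
# All names are hypothetical; the point is only: a long-context LLM, plus narrow tools
# and a knowledge base, plus a library of approved textbooks.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class ExemplaryActorIngredients:
    llm: Any                                                   # ingredient 1: long-context "GPT-5/6 level" LLM
    tools: Dict[str, Callable] = field(default_factory=dict)   # ingredient 2: narrow AIs / compute engines
    knowledge_base: Any = None                                 # ingredient 2: e.g. Wikidata or Wolfram Knowledgebase
    textbooks: List[str] = field(default_factory=list)         # ingredient 3: approved textbooks
```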
If we have these three ingredients, we can build an AI that generates textual descriptions of "exemplary" (i.e., perfectly ethical and “aligned”) plans from textual situation descriptions. Below, I call this AI an “exemplary actor”.
Here’s the exemplary actor’s algorithm (a rough code sketch follows the listing below):
Input: A textual description of the situation for which the exemplary actor should generate a plan.
Generate the first draft of the plan, together with explanations for why this plan is optimal.
While (the plan has been refined since the previous iteration according to at least one textbook):
For (every textbook in the list of approved textbooks):
Load the textbook into the context of the LLM along with the current plan and ask the LLM to generate a critique of the plan from the perspective of the theory described in the textbook.
Check that the critique isn’t “forced” and is at least minimally substantial, to ensure that the algorithm converges. If the critique is forced (artificial) or too insubstantial, reject it and move to the next textbook.
Add the critique to the context and ask the LLM to refine the plan and its explanation, considering the given critique. Make the refined plan the new current plan.
Return the latest version of the plan and the explanation.
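To make the control flow concrete, here is a minimal Python sketch of this loop. It assumes a hypothetical llm.complete(prompt) -> str interface; the prompt templates, the textbook corpus, and the “is the critique substantive?” check are placeholders rather than a definitive implementation:

```python
# Minimal, hypothetical sketch of the exemplary actor's critique-and-refine loop.
# `llm` is assumed to expose a single `complete(prompt: str) -> str` call; the textbook
# corpus, the prompt templates, and the substantiveness check are all placeholders.

def exemplary_actor(llm, textbooks, situation, max_iterations=100):
    """Generate a plan for `situation`, iteratively refined against every textbook."""
    plan = llm.complete(
        f"Situation:\n{situation}\n\n"
        "Draft a plan for this situation and explain why it is optimal."
    )
    for _ in range(max_iterations):
        refined_this_pass = False
        for textbook in textbooks:
            critique = llm.complete(
                f"Textbook:\n{textbook}\n\nPlan:\n{plan}\n\n"
                "Critique this plan from the perspective of the theory in the textbook."
            )
            # Reject "forced" or insubstantial critiques so that the loop can converge.
            verdict = llm.complete(
                f"Critique:\n{critique}\n\n"
                "Is this critique substantive and not forced? Answer YES or NO."
            )
            if not verdict.strip().upper().startswith("YES"):
                continue
            plan = llm.complete(
                f"Textbook:\n{textbook}\n\nPlan:\n{plan}\n\nCritique:\n{critique}\n\n"
                "Refine the plan and its explanation in light of this critique."
            )
            refined_this_pass = True
        if not refined_this_pass:
            # Converged: a full pass over the textbooks produced no accepted critique.
            break
    return plan
```

In this sketch, the returned plan is assumed to carry its latest explanation within the same text, mirroring the “plan and explanation” returned in the listing above.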
When we discuss the exemplary actor described above as a realistic thing, in effect, we also assume the “LLM optimism” view (cf. the “scale is all you need” hypothesis): that is, we assume that some future versions of LLMs such as GPTs will be “smart”/capable enough to generate novel scientific theories and converge on methodologically and scientifically sound inferences (plans, predictions, explanations) through iterated self-critique and refinement, or do this at least as well as the smartest humans can.
2.1. Alignment on methodological disciplines and science is both essential and almost sufficient
Here and below, I also make a non-trivial[footnote: By “non-trivial” here I mean that this assumption doesn’t seem to be “guaranteed to follow” even if all other assumptions that I’ve made above are realised, namely that the exemplary actor will be capable enough to produce novel, sound theoretical science, attend entire textbooks in its context and criticise its own outputs constructively using the theories from these textbooks, etc.] assumption that if the exemplary actor iterates on its predictions and plans with self-critique and refinement while attending to textbooks on methodological and scientific disciplines, it converges on the plans and predictions that actually conform to the theories described in these textbooks. A short way of stating this assumption is “Natural language alignment is possible”. This assumption is discussed further in section 2.3.3 below.
In the post “Goal alignment without alignment on epistemology, ethics, and science is futile”, I’ve explained why the alignment on math, methodological disciplines (foundational philosophy, epistemology, rationality, and applied ethics), and scientific disciplines (physics, cognitive science, consciousness science, ethics as science, etc.) is both necessary and practically sufficient for goal and plan alignment, and, therefore, the outputs of the exemplary actor are indeed exemplary and aligned. Goal alignment[4] will follow from methodological and scientific alignment almost automatically.
2.2. Likely non-issues with the exemplary actor
Even though auto-regressive LLMs are an “exponentially diverging diffusion process” (LeCun), it seems at least plausible that an LMCA that iteratively critiques and refines its own inferences with good textbooks “in mind” should rectify this issue of LLMs: the iterative critique and refinement process should converge in the vicinity of the “correct” prediction, plan, or explanation.
Another of LeCun’s critiques of LLM reasoning is that LLMs lack grounded common sense (Browning & LeCun, 2022); extending their argument, LLMs’ predictions of future world states also have a high risk of being severely biased by whatever linguistic simulacra happened to be most influential during the training of the LLM[5]. However, I think that iterated critique and refinement with methodology and science textbooks in the context, as well as the use of narrow AI tools for scientific modelling and of a reliable knowledge base, will make the abstract reasoning of the exemplary actor at least as practically grounded as human abstract reasoning.
2.2.1. The bias of the “intuitive” module for fast action selection
Nobody ever gets to execute the clever plans prepared by the exemplary actor in their full elaboration. The plans will be filled with information-seeking (i.e., uncertainty-resolving) actions, and the information obtained through these actions is expected to often change the plan. For example, when the exemplary actor is presented with a situation, the very first action in its plan will almost certainly be reaching out to some agents involved in the situation and asking them clarifying questions.
Any practical AI implementation should include a module for “fast” action selection. This module is particularly indispensable in dynamic situations when there is not enough time for iterated critique and refinement of plans, which may take dozens of minutes on modern LLM inference hardware[6]. The latency constraint will likely remain relevant for many years even if we ignore the high cost of the inference of the exemplary actor, or expect the cost to drop so dramatically that it won’t matter. LeCun (2022) calls the module for fast action selection “Mode-1” (after Kahneman’s “System 1”), and it’s often called the “habitual network” in Active Inference literature.
So, the potential issue here is that the LLM’s quick reactive actions[7] in dynamic situations won’t align with the first actions that would result from “System 2”-like deliberation by the exemplary actor.
However, this doesn’t appear to me to be an issue because we can fine-tune a version of the powerful LLM to react fast in dynamic situations, using the exemplary actor to generate the training data. I don’t see why this fine-tuning should work worse than training the equivalent “Mode-1” policy module in the H-JEPA architecture using the data from the more deliberative “Mode-2” module, which uses the World Model to predict and optimise long-term plans and predictions. The fine-tuned LLM may be slightly biased in its fast actions, but so could the “Mode-1” policy in H-JEPA, and it seems that these biases should be quite small and not seriously consequential anyway.
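A minimal sketch of how such training data could be generated, reusing the hypothetical exemplary_actor function from the sketch in section 2; the fine_tune step stands in for any ordinary supervised fine-tuning procedure (both names are illustrative, not an actual pipeline):

```python
# Hypothetical sketch: distil the slow deliberative loop into a fast "Mode-1" policy by
# collecting (situation, first action) pairs and fine-tuning a copy of the LLM on them.

def build_fast_action_dataset(llm, textbooks, situations):
    dataset = []
    for situation in situations:
        plan = exemplary_actor(llm, textbooks, situation)  # slow, deliberative planning
        first_action = llm.complete(
            f"Plan:\n{plan}\n\nState only the first concrete action of this plan."
        )
        dataset.append({"prompt": situation, "completion": first_action})
    return dataset

# fast_llm = fine_tune(base_llm, build_fast_action_dataset(base_llm, textbooks, situations))
```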
2.3. Open problems with the exemplary actor architecture
2.3.1. Direct access to the underlying LLM
In order to predict adversarial actions and generate the best plans, the base LLM of the exemplary actor must not be fine-tuned “for safety” so much that it is completely stifled and cannot speculate about adversarial plans “out loud”, i.e., in the output text rather than within the internal activation spaces of the auto-regressive LLM. Deliberately “pushing” the auto-regressive LLM’s adversarial reasoning “inside” its activations while trying to maintain its adversarial planning capability[8] seems very dangerous, and will definitely require very good interpretability systems for supervision of “misaligned thoughts”, which, of course, could themselves leak adversarial reasoning if the attacker has direct access to the interpretability systems. Thus, if the attacker gains direct access to the LLM, the attacker could use it to generate some plans (or other dangerous knowledge, such as the DNA sequence of a supervirus) which could be very bad for humanity.
2.3.2. Alien world model
Another outstanding issue with the plan of creating the “exemplary actor” is that the powerful underlying LLM has an “alien” world model which is perhaps not understandable to humans even in principle, because the factors/variables in this model may be connected much more densely than humans’ consciousness prior (Bengio, 2019) permits comprehending[9]. Given that the output modality (text) and the assumed means of explanation (scientific textbooks) are “human-like”, i.e., imply sparse causal models, if the LLM actually has an “inhumanly dense” causal world model, then it is not just likely but guaranteed to provide ”unfaithful explanations” for its plans (Turpin et al., 2023), where “faithfulness” means that the output text accurately reflects the thinking process that has led to this or that conclusion.
Still, I’m not eager to jump to the conclusion that this fact alone necessarily means that things will go wrong. The LLM’s alien reasoning is “regularised with” human-understandable theories (whereas the risk that it will not be regularised with textbooks, i.e., the risk of direct access to the LLM, is discussed above): if the LLM is a diligent critic of its own reasoning, it will find these “scientifically unfounded” conclusions and force itself to change them during the refinement of the plan or the prediction. Some “alien bias” may remain in the predictions and plans, though. See also section 4.2 for further discussion of this.
Even if the LLM will have enough situational awareness that it could “play cooperatively” in its generator, critic, and editor[10] roles within the exemplary actor’s deliberative process, I doubt it will be capable enough to sneak into its predictions and plans some steps that will advance its “secret self-serving goals or agenda” in a coherent way[11]. However, the situation is dangerous because the “alien” world model will be derived mostly from language, which re-surfaces the “grounded common sense” challenge that I classified as a likely non-issue in section 2.2.
Also, in this situation, the exemplary actor would actually be model-misaligned with humans, so it would fail to fulfil its original promise, even though this situation might not lead to a “catastrophe” because the ethics critiques keep the exemplary actor from hurting humans. But humans would definitely stop understanding what is happening and where civilisation is going[12].
2.3.3. Feasibility of model alignment through natural language critiques and refinement
Even though linguistic critique and refinement are often considered the most robust way of conforming someone’s plan, prediction, or explanation to some theory, it might be that this is still insufficient to ensure that the textual result of critique and refinement actually conforms to the theory from whose perspective it was criticised. Even though the alignment of multiple cognitive agents’ world models obviously consists at least in part of linguistic communication (Friston et al., 2022), this type of alignment might mainly pertain to relatively simple feedback on past and current behaviour rather than elaborate, “theoretical” critiques of elaborate plans and predictions, expressed in language. Understanding “language alignment” and studying its feasibility is the domain of frontier research in cognitive science, philosophy of language, and semantics.
2.3.4. Alignment tax in capability
As discussed in section 2.2.1, the exemplary actor will bear a significant alignment tax in inference latency. However, in section 4.5 I argue that this is probably not very important even in the context of cyber-warfare, let alone regular business situations.
In terms of capability (i.e., the quality of generated plans and predictions), whether the exemplary actor will be handicapped relative to a bare LLM depends on the “alien” dependencies between concepts and process patterns in the world model learned by the LLM, which are removed from the exemplary plans and predictions as “unjustified from the perspective of approved scientific theories” (see section 2.3.2)[13]. Even if these patterns are the result of LLM overfitting on the training text corpus, this overfitting could still “work better” than the more regularised reasoning of the exemplary actor if humans “overfit” on the linguistic simulacra in the same way. In this case, the LLM might gain an inexplicable predictive intuition in memetics, persuasion, fashion, economics, politics, social psychology, etc., that mathematical scientific theories in these fields will fail to match in predictive power (if such mathematical theories can exist at all: there are doubts that robust predictive mathematical models in these fields could be built even in principle).
On the other hand, the regularisation of reasoning and inference with state-of-the-art theories in methodology (rationality, epistemology) and scientific disciplines should increase the accuracy and calibrated precision of the predictions and plans of the exemplary actor significantly, relative to LMCAs that don’t iteratively critique and refine their plans and predictions with methodological and scientific theories. After all, we humans invest in finding good explanations in science and methodology because we expect them to be useful, including during linguistic reasoning (and not only in engineering calculations). The benefit from this kind of (methodological and scientific) regularisation of reasoning should usually exceed the benefit of using “alien” patterns (even if these patterns are directionally correct on average, they could often fail unexpectedly), except, maybe, in the arenas of action mentioned above (memetic engineering, politics, social engineering) which may benefit disproportionately from the “alien intuition” and gain too little from “scientific regularisation”.
However, if an AI distils its “alien” models into formal math and science (albeit too complex for humans to understand, such as mathematical constructs with dozens of “moving parts”), checks them, and uses those theories to critique and refine its plans in the same manner as the exemplary actor would do with human-understandable theories, this form of “exemplary actor equipped with alien mathematised models” could probably beat the “exemplary actor with human-understandable mathematised models” in a wider range of situations. Therefore, we should monitor the AI to ensure it doesn’t attempt to formalise its alien models.
The previous paragraph concerns all methodological and scientific disciplines with the exception of ethics. Exemplary ethical reasoning should look like voluntarily quitting games in which the agent cannot find realistic options for avoiding “bad” Nash equilibria or for “winning” in morally acceptable ways. Technically, this is not a problem: this is the essence of what we want from an aligned AI. However, on a larger view, ethical alignment becomes a capability handicap if the environments in which the AI will act, i.e., the current economic and geopolitical systems, present themselves as exactly such “bad” games. This is not a technical problem with the exemplary actor’s architecture, though, but rather a risk of the larger project of deploying any (aligned) AI in the world, which I discuss in the next section.
2.4. Risks of the plan to build and deploy the exemplary actor
Apart from technical questions regarding the alignment of the exemplary actor with a given theory (described in a textbook), there are many other risks of the plan of building and deploying the exemplary actor. The two prominent “technical” risks are:
A scientific theory of ethics might indicate that humans are not optimal to keep around, i.e., that it is morally better to replace humans with some other conscious life form. Oops.
The scientific and methodological perspectives that should regularise the plans of the exemplary actor (math, logic, epistemology, rationality, ethics, cognitive science, game theory, control theory, theory of evolution, etc.) might fail to capture some sufficiently important aspects of the complexity of human values and/or behaviour, which means that humans and the exemplary actor will be “misaligned in general” (to a large enough degree that this causes some trouble) even if we are aligned on all these aforementioned (and other) disciplines. I’ve discussed this risk in this post. Although my intuition is that there is only a small risk that such misalignment will be significant enough to cause serious trouble or a “catastrophe”, this risk is non-zero.
Perhaps the most salient “non-technical” risk to me is that the environment (the economic, geopolitical, and social systems) in which the exemplary actor will find itself will not be “winnable” in an ethical way even with the best of its ability and reasoning (see section 2.3.4). I also recently discussed the importance of the right system incentives and instituting an “ethically winnable” system structure in this comment.
However, in this post, I don’t discuss these risks (as well as other risks of the plan to create the exemplary actor which I don’t mention) further because these risks apply equally to the LLM-based exemplary actor discussed so far and to a GFlowNet actor within an H-JEPA agent, which I describe and discuss in sections 3 and 4.
This post was originally published on LessWrong.