Motivation
I believe that the most important factor in whether our AI future goes broadly well or poorly is whether people quickly develop effective AI-ready (and AI-enabled) institutions and networks1. In that, I agree with Séb Krier's recent essay "Maintaining agency and control in an age of accelerated intelligence".
Many academic groups, non-profit orgs (such as Collective Intelligence Project, AI Objectives Institute, Metagov, and Gaia Lab), and even some governmental agencies, such as Taiwan's Ministry of Digital Affairs, are currently working on new AI-ready institutions. However, these projects will likely remain theoretical exercises or prototypes unless there is a population of AI-enabled agents (individuals and organisations) eager to coordinate and solve problems together. This is because agents and institutions need each other to develop and grow in capability and sophistication.
Thus, for new AI-enabled institutions to take root and develop, individuals and organisations have to be at least as AI-ready as these institutions.
Effective use of powerful AI and participation in new economic networks (such as stablecoin payments) promises clear advantages to businesses. So, the AI modernisation of the business sphere is already well aligned with standard economic incentives. It doesn't seem to me that this area needs any extra care or push on the margin.
However, for individuals, such incentives barely exist. Using AI tools at work is not the same as becoming a person ready to participate in AI-first social, political, and media networks (such as Jim Rutt's idea of the network of personal "information agents").
Currently, people mostly use siloed commercial AI apps from OpenAI, Google, Microsoft, and Perplexity. Although all these and other vendors will soon push agentic AI products aggressively, I suspect that they will be reluctant to permit free exploration of the social or political agency of users, because this could be politically risky for them and there is no commercial upside for them in doing this2. So, it's likely that big vendors' agents will keep interacting with the external world on behalf of their users in mostly mundane commercial ways ("plan my next holiday trip") rather than become true companions and faithful representatives of people in social and political domains: setting up a date for the human, recommending a friend, representing the human in a political assembly, and enabling new types of collaboration between people.
From this I conclude that increasing the adoption of truly personal agents could be one of the highest-impact things to do on the margin to enable social, political, or media innovation.
If the above is not a sufficient argument, the wider adoption of personal agents has further positive effects, which double as indirect arguments for working on it:
(1) It reduces the power imbalance between people and corporations: people save money that would otherwise go to corporations as subscription revenue. It also removes or reduces deplatforming risks, as well as the "traditional" risks of surveillance capitalism and behaviour manipulation pervasive in the so-called attention economy.
(2) Individual human intelligence and agency augmentation is almost by definition the most anti-gradual disempowerment and anti-intelligence curse agenda among various other AI safety and "AI for good" agendas.
(3) Making it easier for people to run fully personal agents for non-commercial affairs (socialisation, politics, commoning) that should preferably stay non-commercial is a non-market safety project, so we should expect it to be more neglected by default than market safety projects, such as improving AI robustness or steerability. Of course, to be actually useful and widely adopted, personal agents must be robust, steerable, and have long memory, among other characteristics equally attractive for business AI agents; but these capabilities are already actively developed in open-source AI agent frameworks (driven by business demand), so personal agents can leverage them without differentially pushing them much.
Finally, note that the increasing capability of open-weights LLMs and the falling cost of compute will make personal agents even more relevant over time, because completely private agents will become feasible through inference of open-weights models on rented GPUs. Today, open-weights non-MoE models are not sufficiently robust as agents and not sufficiently "deep" for thoughtful engagement with the human, which limits the practicality of such "completely private" setups (or greatly increases their costs, if someone is willing to host the largest DeepSeek, Qwen, or Llama models all on their own). Also, the increasing robustness of coding agents and DevOps agents (whether built over APIs or open-weights LLMs) will itself reduce the crucial barriers to the adoption and usage of personal agents, as I will discuss below.
Personal agents offer mundane value
The personal agents "movement" would descend ideologically from the self-hosted movement, which advocates for personal, private hosting of apps such as e-mail, calendar, task tracking, photos, and file sharing instead of using free cloud services from Google, Microsoft, or Apple.
It's safe to say that the self-hosted movement has failed: it hasn't gained sufficient traction in about 20 years. I think this is because self-hosting of office apps doesn't actually provide any benefits beyond ideological satisfaction and the reduction of dimly felt risks of deplatforming, hacking, or data leaks.
I'm convinced that good intentions plus mundane benefits go much farther than good intentions alone, and AI adoption is no different. It's hopeless to promote personal AI agents that are "safer" or "more private" but otherwise equally or less useful than the agents offered by big vendors.
Fortunately, I think personal agents have better a priori odds of being adopted than self-hosted productivity apps of the previous era because personal agents do offer immediate and tangible value over agents from big vendors:
Lower cost. Flat pricing (usually $20/mo) is inadequately high for most personal users, yet virtually all AI vendors employ flat pricing: from general platforms such as OpenAI, Google, Microsoft, and Perplexity, to personal AI tutors and psychotherapists such as Auren, to professional agents such as Shortwave and Cursor. I may want to talk to my AI psychotherapist just once per month, and the compute for that will cost less than 10 cents. All personal agents can be run for just $2–3/mo in data and app hosting plus model inference API costs, which will total well below $10/mo for most people across all their AI agents and apps.
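To make the flat-vs-metered comparison concrete, here's a back-of-the-envelope sketch. All token prices and usage numbers below are illustrative assumptions of mine, not quotes from any vendor:

```python
# Back-of-the-envelope comparison: flat subscription vs pay-per-token
# API pricing for a light personal-agent workload.
# Prices below are ILLUSTRATIVE assumptions, not any vendor's actual rates.

PRICE_PER_1M_INPUT_TOKENS = 3.00    # USD, assumed mid-tier model
PRICE_PER_1M_OUTPUT_TOKENS = 15.00  # USD, assumed mid-tier model

def monthly_api_cost(sessions_per_month: int,
                     input_tokens: int,
                     output_tokens: int) -> float:
    """Monthly cost of running a personal agent via a metered API."""
    cost_in = sessions_per_month * input_tokens / 1e6 * PRICE_PER_1M_INPUT_TOKENS
    cost_out = sessions_per_month * output_tokens / 1e6 * PRICE_PER_1M_OUTPUT_TOKENS
    return cost_in + cost_out

# One long therapy-style conversation per month:
# ~20k input tokens (prompt + history), ~5k output tokens.
cost = monthly_api_cost(sessions_per_month=1,
                        input_tokens=20_000,
                        output_tokens=5_000)
print(f"metered: ${cost:.2f}/mo vs flat subscription: $20.00/mo")
```

Even with generous assumptions, occasional use stays one to two orders of magnitude cheaper than a flat subscription.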
International availability. OpenAI and many other AI vendors use Stripe for payment processing in limited configurations, so people without Visa or Mastercard cards cannot pay for their services. There are many such people in developing countries.
Unified context and usage history (memory). People often talk to AIs across several different vendors, partly because they want to compare results from different base LLMs (and each big vendor ties its apps to its own models) and partly because no single vendor offers all the agents and apps that users want. It's impossible to search, query, or reference this tapestry of usage traces. A personal agent platform eliminates this problem by storing all conversation and query history in a single memory layer, such as Cognee. Of course, the user could still maintain context boundaries by attaching different agents and apps to different memory system instances.
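The idea of a single memory layer shared by all of a person's agents can be sketched as follows. This is not Cognee's actual API, just a minimal illustration of the pattern: every agent writes its conversation turns to one searchable store, so history from any agent can be found later:

```python
# Minimal sketch of a unified memory layer shared by several personal
# agents. NOT a real memory product's API -- just an illustration:
# all agents write to one store, so usage history is searchable in one place.
import sqlite3

class UnifiedMemory:
    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS turns "
            "(agent TEXT, role TEXT, content TEXT, "
            " ts TEXT DEFAULT CURRENT_TIMESTAMP)"
        )

    def record(self, agent: str, role: str, content: str) -> None:
        """Every agent logs its conversation turns here."""
        self.db.execute(
            "INSERT INTO turns (agent, role, content) VALUES (?, ?, ?)",
            (agent, role, content))
        self.db.commit()

    def search(self, query: str):
        # Naive substring search across ALL agents' histories; a real
        # memory layer would use embeddings or a knowledge graph instead.
        cur = self.db.execute(
            "SELECT agent, role, content FROM turns WHERE content LIKE ?",
            (f"%{query}%",))
        return cur.fetchall()

mem = UnifiedMemory()
mem.record("research-agent", "user", "Summarise papers on collective intelligence")
mem.record("tutor-agent", "assistant", "Collective intelligence emerges when...")
print(mem.search("collective intelligence"))  # hits from both agents
```

Context boundaries then fall out naturally: attach a different `UnifiedMemory` instance (a different database path) to agents whose histories should stay separate.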
Customisation. Want some agent to send you a notification every other day? Want the deep research agent to ignore results from a certain domain or author? Or to exchange certain information with your family's or friends' agents in specific situations? A coding agent should be able to do this quite reliably with a single prompt against the stock open-source version of the specific agent. By the end of 2025, coding agents should become so capable that non-programmers can rely on such customisation to work (and to warn them if they ask for something suspicious) without knowing anything about the source code of the agent they want to customise.
Risks
I'm aware that the benefits of the personal agent platform that I mentioned above come with their own risks. For example, keeping all personal agents on a single hosting account increases the blast radius if this account is stolen. Or, the "vibe coding" agent can introduce a vulnerability into the code or simply break it in a subtle way.
These risks seem like the only notable downsides of personal agents, both as a personal choice and as an agenda. I'm still advocating for wide adoption of personal agents because the benefits seem to outweigh the risks. I also expect that the state of mundane computer security and LLM security (such as against jailbreaks and prompt injection) will get better rather than worse in the next couple of years. If you think I'm wrong about either of those, please let me know.
The simplest and probably the most likely risk in the deployment of personal agents is not clever prompt injections on the web pages that the agent reads, nor a vulnerability introduced accidentally when the human asks the coding agent to customise another app or agent, but the voluntary deployment of agents bundled with malware, downloaded from "agent sharing" websites (perhaps the next generation of shareware websites) or untrusted GitHub repositories.
It's simple to play an active, positive role in mitigating this specific risk, as well as in increasing trust in personal agents overall and thus fostering their adoption: create a directory of vetted agent repositories (and the specific versions and commits within them) and continuously scan them for vulnerabilities with SoTA AI for code security. With resource pooling, and perhaps public or institutional funding, this necessary work becomes economical to do once for everyone, reducing the risks for all users.
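The client side of such a vetted directory could be very simple. The sketch below is hypothetical (the repo name, commit hash, and directory format are all invented for illustration): before running an agent, the installer checks that the exact commit it is about to execute appears in the published list of scanned-and-approved commits:

```python
# Sketch of the CLIENT side of a hypothetical vetted-agent directory:
# before installing an agent, verify that the exact commit you are about
# to run is in the published list of commits that passed a security scan.
# Repo names, hashes, and the directory format here are all invented.

VETTED: dict[str, set[str]] = {
    # repo URL -> commit hashes that passed the vulnerability scan
    "github.com/example/personal-scheduler": {
        "9f2c1a7e0b4d6c8a1e3f5a7b9d0c2e4f6a8b0c1d",  # illustrative hash
    },
}

def is_vetted(repo: str, commit: str) -> bool:
    """True only if this exact commit of this repo passed vetting.

    Pinning to a commit hash (rather than a branch or tag) matters:
    branches and tags can be moved to point at malicious code later.
    """
    return commit in VETTED.get(repo, set())

print(is_vetted("github.com/example/personal-scheduler",
                "9f2c1a7e0b4d6c8a1e3f5a7b9d0c2e4f6a8b0c1d"))  # True
print(is_vetted("github.com/example/personal-scheduler",
                "deadbeef"))                                   # False
```

In a real deployment the directory itself would be fetched over HTTPS and signed, so a compromised mirror can't inject its own "vetted" entries.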
Levers for fostering the adoption of personal agents
To summarise the above, here's how I see the main areas of work for helping personal agents spread:
(1) Make open-source agents more capable and useful at their main tasks than the analogous agents from big vendors. Open-source agents are at a disadvantage because they will likely use stock LLMs with prompting rather than LLMs post-trained specifically for the given agentic tasks. However, for most personal use cases, the difference may be small or non-existent, especially as the capabilities of stock models increase.
(2) Help the open-source agent development ecosystem flourish by reducing the barrier to entry into this kind of development, through agent project templates, scaffolds, and a tested stack of infrastructure pieces (hosting platforms, databases, execution environments, etc.). AgentStack is an example of such a project; however, it's not focused on personal agents. Ideally, agent developers themselves start to seek compatibility with the personal agents stack (platform, toolkit) because it aids their distribution, while the personal agents platform benefits from a wider variety of supported agents.
(3) Eliminate or reduce the barriers to adoption of personal agents, both technical and financial/jurisdictional. This is a crucial lesson from the failure of the self-hosted movement: non-programmers must be able to set up their own personal agents platform with dead-simple, short, step-by-step instructions, ideally just to the point of running a manager agent that walks the human through the rest of the process in a dialogue and helps them maintain, evolve, and customise their agents. This process should work robustly enough that it gains a reputation for "just working" among non-programmers. Homebrew comes to mind as an example of a project with such a reputation.
Wrt. financial and jurisdictional barriers: reduce the number of separate payments the human needs to make and accounts they need to manage. OpenRouter, Requesty.ai, and nano-gpt.com do a great job at unifying LLM API bills (as well as enabling access), but none of them supports embedding models (on the other hand, embedding-based RAG has recently been falling increasingly out of favour among AI agent developers). LiteLLM does support embedding models, but doesn't onboard small customers yet.
Ideally, there should be a way to unify both model API and hosting bills, but unfortunately it doesn't currently seem to me that the best hosting services for the personal agent platform (such as Fly.io) will be eager to enter this LLM-proxy business, because it would be an unnecessary risk for them. It would be awesome if Fly.io or a similar hosting service (DigitalOcean, Vercel, Render, etc.) proved me wrong.
So, two separate bills (and accounts) is probably the minimum achievable today (with embedding model inference rationed through the Gemini Embedding API's free usage tier, for instance).
(4) Support distribution and discoverability of personal agent projects. Currently, it's surprisingly hard to even discover the coolest open-source agent projects on the block, despite them instantly amassing thousands of stars on GitHub. Perhaps I don't hang around the right Discord channels or subreddits, but needing to do either of those things already sounds like a deal breaker if we aim for really wide adoption. HuggingFace Spaces and theresanaiforthat.com might be good enough; ideally, these and similar platforms would add a tag for projects compatible with the personal agents toolkit.
(5) Create a directory of open-source agents scanned for malware and vulnerabilities (see the "Risks" section above) to minimise the chance of a major hack that can undermine people's trust in personal agents.
1. Gaia Network is one particular form of such network/institution that Rafael Kaufmann and collaborators in the Gaia Lab have been shaping up.
2. With the possible exception of Meta, whose positioning and business model writ large may, and ideally should, be compatible with the enablement of new social institutions. However, in practice, it's much more likely that Meta will move in the exact opposite direction: the atomisation of people, which can usually be monetised more easily.